Generalized Notation Notation (GNN) Pipeline Output Summary

Table of Contents

  • GNN Discovery (Step 1)
  • GNN Type Checker (Step 4)
  • GNN Exports (Step 5)
  • GNN Processing Summary (Overall File List)
  • GNN Visualizations (Step 6)

GNN Discovery (Step 1)

GNN File Discovery Report

Processed 2 GNN file(s) from directory: src/gnn/examples
Search pattern used: **/*.md
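
For reference, the discovery step amounts to a recursive glob over the target directory. A minimal sketch of that behaviour (paths and pattern as reported above; not the pipeline's actual implementation):

from pathlib import Path

# Recursive search matching the reported pattern **/*.md
gnn_files = sorted(Path("src/gnn/examples").glob("**/*.md"))
print(f"Processed {len(gnn_files)} GNN file(s)")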

Overall Summary

Both discovered files were parsed and subsequently passed the Step 4 type check.


Detailed File Analysis

File: src/gnn/examples/pymdp_pomdp_agent.md

Found Sections: GNNSection, GNNVersionAndFlags, ModelName, ModelAnnotation, StateSpaceBlock, Connections, InitialParameterization, Equations, Time, ActInfOntologyAnnotation, ModelParameters, Footer, Signature


File: src/gnn/examples/rxinfer_multiagent_gnn.md

Found Sections: GNNSection, GNNVersionAndFlags, ModelName, ModelAnnotation, StateSpaceBlock, Connections, InitialParameterization, Equations, Time, ActInfOntologyAnnotation, ModelParameters, Footer, Signature


GNN Type Checker (Step 4)

GNN Type Checker Report

pymdp_pomdp_agent.md: ✅ VALID

Path: src/gnn/examples/pymdp_pomdp_agent.md

rxinfer_multiagent_gnn.md: ✅ VALID

Path: src/gnn/examples/rxinfer_multiagent_gnn.md

Checked 2 files, 2 valid, 0 invalid

Resource Estimates

Markdown Reports

resource_report.md

GNN Resource Estimation Report

Analyzed 2 files.
  • Average Memory Usage: 0.50 KB
  • Average Inference Time: 218.62 units
  • Average Storage: 5.29 KB

pymdp_pomdp_agent.md

Path: src/gnn/examples/pymdp_pomdp_agent.md
  • Memory Estimate: 0.48 KB
  • Inference Estimate: 154.07 units
  • Storage Estimate: 3.83 KB

Model Info

  • variables_count: 21
  • edges_count: 2
  • time_spec: Dynamic
  • equation_count: 5

Complexity Metrics

  • state_space_complexity: 6.9658
  • graph_density: 0.0048
  • avg_in_degree: 1.0000
  • avg_out_degree: 1.0000
  • max_in_degree: 1.0000
  • max_out_degree: 1.0000
  • cyclic_complexity: 0.0000
  • temporal_complexity: 0.0000
  • equation_complexity: 8.7600
  • overall_complexity: 8.7413
  • variable_count: 21.0000
  • edge_count: 2.0000
  • total_state_space_dim: 124.0000
  • max_variable_dim: 27.0000

rxinfer_multiagent_gnn.md

Path: src/gnn/examples/rxinfer_multiagent_gnn.md
  • Memory Estimate: 0.52 KB
  • Inference Estimate: 283.16 units
  • Storage Estimate: 6.76 KB

Model Info

  • variables_count: 60
  • edges_count: 1
  • time_spec: Dynamic
  • equation_count: 15

Complexity Metrics

  • state_space_complexity: 6.8202
  • graph_density: 0.0003
  • avg_in_degree: 1.0000
  • avg_out_degree: 1.0000
  • max_in_degree: 1.0000
  • max_out_degree: 1.0000
  • cyclic_complexity: 0.0000
  • temporal_complexity: 0.0000
  • equation_complexity: 3.2578
  • overall_complexity: 5.3649
  • variable_count: 60.0000
  • edge_count: 1.0000
  • total_state_space_dim: 112.0000
  • max_variable_dim: 16.0000

Metric Definitions

General Metrics

  • Memory Estimate (KB): Estimated RAM required to hold the model's variables and data structures in memory. Calculated from variable dimensions and data types (e.g., float: 4 bytes, int: 4 bytes); a minimal sketch of this calculation follows the list.
  • Inference Estimate (units): A relative, abstract measure of computational cost for a single inference pass. It is derived from factors like model type (Static, Dynamic, Hierarchical), the number and type of variables, the complexity of connections (edges), and the operations defined in equations. Higher values indicate a more computationally intensive model. These units are not tied to a specific hardware time (e.g., milliseconds) but allow for comparison between different GNN models.
  • Storage Estimate (KB): Estimated disk space required to store the model file. This includes the memory footprint of the data plus overhead for the GNN textual representation, metadata, comments, and equations.
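
To make the memory figure concrete, here is a minimal sketch of a dimension-based estimate, assuming the 4-bytes-per-element rule stated above; the pipeline's actual estimator may add per-structure overheads.

from math import prod

def memory_estimate_kb(variables):
    """variables: list of (name, dims) pairs, e.g. [("A_m0", (3, 2, 3))]."""
    total_bytes = sum(4 * prod(dims) for _, dims in variables)  # 4 bytes per element
    return total_bytes / 1024.0

# Example: the three likelihood tensors of the PyMDP model in this report.
likelihoods = [("A_m0", (3, 2, 3)), ("A_m1", (3, 2, 3)), ("A_m2", (3, 2, 3))]
print(f"{memory_estimate_kb(likelihoods):.4f} KB")  # 54 elements * 4 B = 216 B ≈ 0.21 KB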

Complexity Metrics (scores are generally relative; higher often means more complex)

  • state_space_complexity: Logarithmic measure of the total dimensionality of all variables (sum of the product of dimensions for each variable). Represents the model's theoretical information capacity or the size of its state space; reproduced, together with graph_density, in the sketch after this list.
  • graph_density: Ratio of actual edges to the maximum possible edges in the model graph. A value of 0 indicates no connections, while 1 would mean a fully connected graph. Measures how interconnected the variables are.
  • avg_in_degree: Average number of incoming connections (edges) per variable.
  • avg_out_degree: Average number of outgoing connections (edges) per variable.
  • max_in_degree: Maximum number of incoming connections for any single variable in the model.
  • max_out_degree: Maximum number of outgoing connections for any single variable in the model.
  • cyclic_complexity: A score indicating the presence and extent of cyclic patterns or feedback loops in the graph. Approximated based on the ratio of edges to variables; higher values suggest more complex recurrent interactions.
  • temporal_complexity: Proportion of edges that involve time dependencies (e.g., connecting a variable at time t to one at t+1). Indicates the degree to which the model's behavior depends on past states or sequences.
  • equation_complexity: A measure based on the average length, number, and types of mathematical operators (e.g., +, *, log, softmax) used in the model's equations. Higher values suggest more intricate mathematical relationships between variables.
  • overall_complexity: A weighted composite score (typically scaled, e.g., 0-10) that combines state space size, graph structure (density, cyclicity), temporal aspects, and equation complexity to provide a single, holistic measure of the model's intricacy.
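
Two of these metrics can be reproduced exactly from the model-info counts reported above. The formulas below are inferred from those numbers (log2 of total dimensionality plus one, and edges over N(N-1) for a directed graph without self-loops), not taken from the pipeline's source, so treat them as assumptions.

import math

def state_space_complexity(total_state_space_dim):
    return math.log2(total_state_space_dim + 1)

def graph_density(edge_count, variable_count):
    return edge_count / (variable_count * (variable_count - 1))

print(state_space_complexity(124))  # 6.9658 (pymdp_pomdp_agent)
print(graph_density(2, 21))         # 0.0048 (pymdp_pomdp_agent)
print(state_space_complexity(112))  # 6.8202 (rxinfer_multiagent_gnn)
print(graph_density(1, 60))         # 0.0003 (rxinfer_multiagent_gnn)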

HTML Reports/Outputs

resource_report_detailed.html


JSON Files

resource_data.json

{
  "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md": {
    "file": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md",
    "model_name": "Multifactor PyMDP Agent v1",
    "memory_estimate": 0.484375,
    "inference_estimate": 154.06988264859797,
    "storage_estimate": 3.82846875,
    "flops_estimate": {
      "total_flops": 1050.0,
      "matrix_operations": 0,
      "element_operations": 0,
      "nonlinear_operations": 0
    },
    "inference_time_estimate": {
      "cpu_time_seconds": 2.1e-08,
      "cpu_time_ms": 2.1e-05,
      "cpu_time_us": 0.020999999999999998
    },
    "batched_inference_estimate": {
      "batch_1": {
        "flops": 1050.0,
        "time_seconds": 2.1e-08,
        "throughput_per_second": 47619047.61904762
      },
      "batch_8": {
        "flops": 6674.971489500035,
        "time_seconds": 1.334994297900007e-07,
        "throughput_per_second": 59925349.58826627
      },
      "batch_32": {
        "flops": 25518.25782075925,
        "time_seconds": 5.10365156415185e-07,
        "throughput_per_second": 62700205.13306323
      },
      "batch_128": {
        "flops": 99830.77636640746,
        "time_seconds": 1.9966155273281492e-06,
        "throughput_per_second": 64108486.710652955
      },
      "batch_512": {
        "flops": 394234.3967437306,
        "time_seconds": 7.884687934874611e-06,
        "throughput_per_second": 64935987.85760216
      }
    },
    "model_overhead": {
      "compilation_ms": 79,
      "optimization_ms": 240.5,
      "memory_overhead_kb": 2.572265625
    },
    "complexity": {
      "state_space_complexity": 6.965784284662087,
      "graph_density": 0.004761904761904762,
      "avg_in_degree": 1.0,
      "avg_out_degree": 1.0,
      "max_in_degree": 1,
      "max_out_degree": 1,
      "cyclic_complexity": 0,
      "temporal_complexity": 0.0,
      "equation_complexity": 8.76,
      "overall_complexity": 8.741273094711996,
      "variable_count": 21,
      "edge_count": 2,
      "total_state_space_dim": 124,
      "max_variable_dim": 27
    },
    "model_info": {
      "variables_count": 21,
      "edges_count": 2,
      "time_spec": "Dynamic",
      "equation_count": 5
    }
  },
  "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md": {
    "file": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md",
    "model_name": "Multi-agent Trajectory Planning",
    "memory_estimate": 0.5166015625,
    "inference_estimate": 283.1611446514433,
    "storage_estimate": 6.7573515625,
    "flops_estimate": {
      "total_flops": 20.0,
      "matrix_operations": 0,
      "element_operations": 8,
      "nonlinear_operations": 0
    },
    "inference_time_estimate": {
      "cpu_time_seconds": 4e-10,
      "cpu_time_ms": 4.0000000000000003e-07,
      "cpu_time_us": 0.0004
    },
    "batched_inference_estimate": {
      "batch_1": {
        "flops": 20.0,
        "time_seconds": 4e-10,
        "throughput_per_second": 2500000000.0
      },
      "batch_8": {
        "flops": 127.14231408571496,
        "time_seconds": 2.5428462817142993e-09,
        "throughput_per_second": 3146080853.383979
      },
      "batch_32": {
        "flops": 486.0620537287476,
        "time_seconds": 9.721241074574952e-09,
        "throughput_per_second": 3291760769.48582
      },
      "batch_128": {
        "flops": 1901.5385974553803,
        "time_seconds": 3.8030771949107605e-08,
        "throughput_per_second": 3365695552.30928
      },
      "batch_512": {
        "flops": 7509.226604642487,
        "time_seconds": 1.5018453209284973e-07,
        "throughput_per_second": 3409139362.5241137
      }
    },
    "model_overhead": {
      "compilation_ms": 206,
      "optimization_ms": 1820.0,
      "memory_overhead_kb": 5.423828125
    },
    "complexity": {
      "state_space_complexity": 6.820178962415188,
      "graph_density": 0.0002824858757062147,
      "avg_in_degree": 1.0,
      "avg_out_degree": 1.0,
      "max_in_degree": 1,
      "max_out_degree": 1,
      "cyclic_complexity": 0,
      "temporal_complexity": 0.0,
      "equation_complexity": 3.2577777777777777,
      "overall_complexity": 5.364897390812113,
      "variable_count": 60,
      "edge_count": 1,
      "total_state_space_dim": 112,
      "max_variable_dim": 16
    },
    "model_info": {
      "variables_count": 60,
      "edges_count": 1,
      "time_spec": "Dynamic",
      "equation_count": 15
    }
  }
}
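
A minimal sketch for consuming this file downstream, assuming only the key layout visible in the dump above. Note that the reported throughput_per_second values are consistent with batch_size / time_seconds.

import json

with open("resource_data.json") as f:
    data = json.load(f)

for path, report in data.items():
    b8 = report["batched_inference_estimate"]["batch_8"]
    print(f"{report['model_name']}: {report['flops_estimate']['total_flops']:.0f} FLOPs, "
          f"batch-8 throughput {8 / b8['time_seconds']:.3e}/s "
          f"(reported {b8['throughput_per_second']:.3e}/s)")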

GNN Exports (Step 5)

📤 GNN Export Step Summary

🗓️ Generated: 2025-06-06 13:10:58

📊 Export Statistics

Exported 2 GNN models; each produced a JSON export and a text summary.

Exports for pymdp_pomdp_agent

JSON Files

pymdp_pomdp_agent.json

{
  "file_path": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md",
  "name": "Multifactor PyMDP Agent v1",
  "metadata": {
    "description": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example."
  },
  "states": [
    {
      "id": "A_m0",
      "dimensions": "3,2,3,type=float",
      "original_id": "A_m0"
    },
    {
      "id": "A_m1",
      "dimensions": "3,2,3,type=float",
      "original_id": "A_m1"
    },
    {
      "id": "A_m2",
      "dimensions": "3,2,3,type=float",
      "original_id": "A_m2"
    },
    {
      "id": "B_f0",
      "dimensions": "2,2,1,type=float",
      "original_id": "B_f0"
    },
    {
      "id": "B_f1",
      "dimensions": "3,3,3,type=float",
      "original_id": "B_f1"
    },
    {
      "id": "C_m0",
      "dimensions": "3,type=float",
      "original_id": "C_m0"
    },
    {
      "id": "C_m1",
      "dimensions": "3,type=float",
      "original_id": "C_m1"
    },
    {
      "id": "C_m2",
      "dimensions": "3,type=float",
      "original_id": "C_m2"
    },
    {
      "id": "D_f0",
      "dimensions": "2,type=float",
      "original_id": "D_f0"
    },
    {
      "id": "D_f1",
      "dimensions": "3,type=float",
      "original_id": "D_f1"
    },
    {
      "id": "s_f0",
      "dimensions": "2,1,type=float",
      "original_id": "s_f0"
    },
    {
      "id": "s_f1",
      "dimensions": "3,1,type=float",
      "original_id": "s_f1"
    },
    {
      "id": "s_prime_f0",
      "dimensions": "2,1,type=float",
      "original_id": "s_prime_f0"
    },
    {
      "id": "s_prime_f1",
      "dimensions": "3,1,type=float",
      "original_id": "s_prime_f1"
    },
    {
      "id": "o_m0",
      "dimensions": "3,1,type=float",
      "original_id": "o_m0"
    },
    {
      "id": "o_m1",
      "dimensions": "3,1,type=float",
      "original_id": "o_m1"
    },
    {
      "id": "o_m2",
      "dimensions": "3,1,type=float",
      "original_id": "o_m2"
    },
    {
      "id": "u_f1",
      "dimensions": "1,type=int",
      "original_id": "u_f1"
    },
    {
      "id": "G",
      "dimensions": "1,type=float",
      "original_id": "G"
    },
    {
      "id": "t",
      "dimensions": "1,type=int",
      "original_id": "t"
    }
  ],
  "parameters": {},
  "initial_parameters": {},
  "observations": [],
  "transitions": [
    {
      "sources": [
        "D_f0",
        "D_f1"
      ],
      "operator": "-",
      "targets": [
        "s_f0",
        "s_f1"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "s_f0",
        "s_f1"
      ],
      "operator": "-",
      "targets": [
        "A_m0",
        "A_m1",
        "A_m2"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "A_m0",
        "A_m1",
        "A_m2"
      ],
      "operator": "-",
      "targets": [
        "o_m0",
        "o_m1",
        "o_m2"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "B_f0",
        "B_f1"
      ],
      "operator": "-",
      "targets": [
        "s_prime_f0",
        "s_prime_f1"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "C_m0",
        "C_m1",
        "C_m2"
      ],
      "operator": ">",
      "targets": [
        "G"
      ],
      "attributes": {}
    }
  ],
  "ontology_annotations": {
    "A_m0": "LikelihoodMatrixModality0",
    "A_m1": "LikelihoodMatrixModality1",
    "A_m2": "LikelihoodMatrixModality2",
    "B_f0": "TransitionMatrixFactor0",
    "B_f1": "TransitionMatrixFactor1",
    "C_m0": "LogPreferenceVectorModality0",
    "C_m1": "LogPreferenceVectorModality1",
    "C_m2": "LogPreferenceVectorModality2",
    "D_f0": "PriorOverHiddenStatesFactor0",
    "D_f1": "PriorOverHiddenStatesFactor1",
    "s_f0": "HiddenStateFactor0",
    "s_f1": "HiddenStateFactor1",
    "s_prime_f0": "NextHiddenStateFactor0",
    "s_prime_f1": "NextHiddenStateFactor1",
    "o_m0": "ObservationModality0",
    "o_m1": "ObservationModality1",
    "o_m2": "ObservationModality2",
    "\u03c0_f1": "PolicyVectorFactor1 # Distribution over actions for factor 1",
    "u_f1": "ActionFactor1       # Chosen action for factor 1",
    "G": "ExpectedFreeEnergy"
  },
  "equations_text": "",
  "time_info": {
    "DiscreteTime": "t",
    "ModelTimeHorizon": "Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon."
  },
  "footer_text": "",
  "signature": {},
  "raw_sections": {
    "GNNSection": "MultifactorPyMDPAgent",
    "GNNVersionAndFlags": "GNN v1",
    "ModelName": "Multifactor PyMDP Agent v1",
    "ModelAnnotation": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example.",
    "StateSpaceBlock": "# A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]\nA_m0[3,2,3,type=float]   # Likelihood for modality 0 (\"state_observation\")\nA_m1[3,2,3,type=float]   # Likelihood for modality 1 (\"reward\")\nA_m2[3,2,3,type=float]   # Likelihood for modality 2 (\"decision_proprioceptive\")\n\n# B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]\nB_f0[2,2,1,type=float]   # Transitions for factor 0 (\"reward_level\"), 1 implicit action (uncontrolled)\nB_f1[3,3,3,type=float]   # Transitions for factor 1 (\"decision_state\"), 3 actions\n\n# C_vectors are defined per modality: C_m[observation_outcomes]\nC_m0[3,type=float]       # Preferences for modality 0\nC_m1[3,type=float]       # Preferences for modality 1\nC_m2[3,type=float]       # Preferences for modality 2\n\n# D_vectors are defined per hidden state factor: D_f[states]\nD_f0[2,type=float]       # Prior for factor 0\nD_f1[3,type=float]       # Prior for factor 1\n\n# Hidden States\ns_f0[2,1,type=float]     # Hidden state for factor 0 (\"reward_level\")\ns_f1[3,1,type=float]     # Hidden state for factor 1 (\"decision_state\")\ns_prime_f0[2,1,type=float] # Next hidden state for factor 0\ns_prime_f1[3,1,type=float] # Next hidden state for factor 1\n\n# Observations\no_m0[3,1,type=float]     # Observation for modality 0\no_m1[3,1,type=float]     # Observation for modality 1\no_m2[3,1,type=float]     # Observation for modality 2\n\n# Policy and Control\n\u03c0_f1[3,type=float]       # Policy (distribution over actions) for controllable factor 1\nu_f1[1,type=int]         # Action taken for controllable factor 1\nG[1,type=float]          # Expected Free Energy (overall, or can be per policy)\nt[1,type=int]            # Time step",
    "Connections": "(D_f0,D_f1)-(s_f0,s_f1)\n(s_f0,s_f1)-(A_m0,A_m1,A_m2)\n(A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2)\n(s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled\n(B_f0,B_f1)-(s_prime_f0,s_prime_f1)\n(C_m0,C_m1,C_m2)>G\nG>\u03c0_f1\n\u03c0_f1-u_f1\nG=ExpectedFreeEnergy\nt=Time",
    "InitialParameterization": "# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]\n# A[0][:, :, 0] = np.ones((3,2))/3\n# A[0][:, :, 1] = np.ones((3,2))/3\n# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)\nA_m0={\n  ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ),  # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)\n  ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ),  # obs=1\n  ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) )   # obs=2\n}\n\n# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3\n# A[1][2, :, 0] = [1.0,1.0]\n# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]\n# A[1][2, :, 2] = [1.0,1.0]\n# Others are 0.\nA_m1={\n  ( (0.0,0.731,0.0), (0.0,0.269,0.0) ),  # obs=0\n  ( (0.0,0.269,0.0), (0.0,0.731,0.0) ),  # obs=1\n  ( (1.0,0.0,1.0), (1.0,0.0,1.0) )      # obs=2\n}\n\n# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3\n# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0\n# Others are 0.\nA_m2={\n  ( (1.0,0.0,0.0), (1.0,0.0,0.0) ),  # obs=0\n  ( (0.0,1.0,0.0), (0.0,1.0,0.0) ),  # obs=1\n  ( (0.0,0.0,1.0), (0.0,0.0,1.0) )   # obs=2\n}\n\n# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]\n# B_f0 = eye(2)\nB_f0={\n  ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)\n  ( (0.0),(1.0) )  # s_next=1\n}\n\n# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]\n# B_f1[:,:,action_idx] = eye(3) for each action\nB_f1={\n  ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...\n  ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1\n  ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) )  # s_next=2\n}\n\n# C_m0: num_obs[0]=3. Defaults to zeros.\nC_m0={(0.0,0.0,0.0)}\n\n# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0\nC_m1={(1.0,-2.0,0.0)}\n\n# C_m2: num_obs[2]=3. Defaults to zeros.\nC_m2={(0.0,0.0,0.0)}\n\n# D_f0: factor 0 (2 states). Uniform prior.\nD_f0={(0.5,0.5)}\n\n# D_f1: factor 1 (3 states). Uniform prior.\nD_f1={(0.33333,0.33333,0.33333)}",
    "InitialParameterization_raw_content": "# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]\n# A[0][:, :, 0] = np.ones((3,2))/3\n# A[0][:, :, 1] = np.ones((3,2))/3\n# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)\nA_m0={\n  ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ),  # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)\n  ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ),  # obs=1\n  ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) )   # obs=2\n}\n\n# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3\n# A[1][2, :, 0] = [1.0,1.0]\n# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]\n# A[1][2, :, 2] = [1.0,1.0]\n# Others are 0.\nA_m1={\n  ( (0.0,0.731,0.0), (0.0,0.269,0.0) ),  # obs=0\n  ( (0.0,0.269,0.0), (0.0,0.731,0.0) ),  # obs=1\n  ( (1.0,0.0,1.0), (1.0,0.0,1.0) )      # obs=2\n}\n\n# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3\n# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0\n# Others are 0.\nA_m2={\n  ( (1.0,0.0,0.0), (1.0,0.0,0.0) ),  # obs=0\n  ( (0.0,1.0,0.0), (0.0,1.0,0.0) ),  # obs=1\n  ( (0.0,0.0,1.0), (0.0,0.0,1.0) )   # obs=2\n}\n\n# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]\n# B_f0 = eye(2)\nB_f0={\n  ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)\n  ( (0.0),(1.0) )  # s_next=1\n}\n\n# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]\n# B_f1[:,:,action_idx] = eye(3) for each action\nB_f1={\n  ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...\n  ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1\n  ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) )  # s_next=2\n}\n\n# C_m0: num_obs[0]=3. Defaults to zeros.\nC_m0={(0.0,0.0,0.0)}\n\n# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0\nC_m1={(1.0,-2.0,0.0)}\n\n# C_m2: num_obs[2]=3. Defaults to zeros.\nC_m2={(0.0,0.0,0.0)}\n\n# D_f0: factor 0 (2 states). Uniform prior.\nD_f0={(0.5,0.5)}\n\n# D_f1: factor 1 (3 states). Uniform prior.\nD_f1={(0.33333,0.33333,0.33333)}",
    "Equations": "# Standard PyMDP agent equations for state inference (infer_states),\n# policy inference (infer_policies), and action sampling (sample_action).\n# qs = infer_states(o)\n# q_pi, efe = infer_policies()\n# action = sample_action()",
    "Time": "Dynamic\nDiscreteTime=t\nModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.",
    "ActInfOntologyAnnotation": "A_m0=LikelihoodMatrixModality0\nA_m1=LikelihoodMatrixModality1\nA_m2=LikelihoodMatrixModality2\nB_f0=TransitionMatrixFactor0\nB_f1=TransitionMatrixFactor1\nC_m0=LogPreferenceVectorModality0\nC_m1=LogPreferenceVectorModality1\nC_m2=LogPreferenceVectorModality2\nD_f0=PriorOverHiddenStatesFactor0\nD_f1=PriorOverHiddenStatesFactor1\ns_f0=HiddenStateFactor0\ns_f1=HiddenStateFactor1\ns_prime_f0=NextHiddenStateFactor0\ns_prime_f1=NextHiddenStateFactor1\no_m0=ObservationModality0\no_m1=ObservationModality1\no_m2=ObservationModality2\n\u03c0_f1=PolicyVectorFactor1 # Distribution over actions for factor 1\nu_f1=ActionFactor1       # Chosen action for factor 1\nG=ExpectedFreeEnergy",
    "ModelParameters": "num_hidden_states_factors: [2, 3]  # s_f0[2], s_f1[3]\nnum_obs_modalities: [3, 3, 3]     # o_m0[3], o_m1[3], o_m2[3]\nnum_control_factors: [1, 3]   # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)",
    "Footer": "Multifactor PyMDP Agent v1 - GNN Representation",
    "Signature": "NA"
  },
  "other_sections": {},
  "gnnsection": {},
  "gnnversionandflags": {},
  "equations": "# Standard PyMDP agent equations for state inference (infer_states),\n# policy inference (infer_policies), and action sampling (sample_action).\n# qs = infer_states(o)\n# q_pi, efe = infer_policies()\n# action = sample_action()",
  "ModelParameters": {
    "num_hidden_states_factors": "[2, 3]",
    "num_obs_modalities": "[3, 3, 3]",
    "num_control_factors": "[1, 3]"
  },
  "num_hidden_states_factors": "[2, 3]",
  "num_obs_modalities": "[3, 3, 3]",
  "num_control_factors": "[1, 3]",
  "footer": "Multifactor PyMDP Agent v1 - GNN Representation"
}
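
The export encodes variable shapes as strings such as "3,2,3,type=float". A small parser sketch (field layout assumed from the examples above):

def parse_dimensions(spec):
    """Split "3,2,3,type=float" into ((3, 2, 3), "float")."""
    dims, dtype = [], "float"
    for part in (p.strip() for p in spec.split(",")):
        if part.startswith("type="):
            dtype = part[len("type="):]
        else:
            dims.append(int(part))
    return tuple(dims), dtype

print(parse_dimensions("3,2,3,type=float"))  # ((3, 2, 3), 'float')
print(parse_dimensions("1,type=int"))        # ((1,), 'int')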

Text/Log Files

pymdp_pomdp_agent.txt

GNN Model Summary: Multifactor PyMDP Agent v1
Source File: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md

Metadata:
  description: This model represents a PyMDP agent with multiple observation modalities and hidden state factors.
- Observation modalities: "state_observation" (3 outcomes), "reward" (3 outcomes), "decision_proprioceptive" (3 outcomes)
- Hidden state factors: "reward_level" (2 states), "decision_state" (3 states)
- Control: "decision_state" factor is controllable with 3 possible actions.
The parameterization is derived from a PyMDP Python script example.

States (20):
  - ID: A_m0 (dimensions=3,2,3,type=float, original_id=A_m0)
  - ID: A_m1 (dimensions=3,2,3,type=float, original_id=A_m1)
  - ID: A_m2 (dimensions=3,2,3,type=float, original_id=A_m2)
  - ID: B_f0 (dimensions=2,2,1,type=float, original_id=B_f0)
  - ID: B_f1 (dimensions=3,3,3,type=float, original_id=B_f1)
  - ID: C_m0 (dimensions=3,type=float, original_id=C_m0)
  - ID: C_m1 (dimensions=3,type=float, original_id=C_m1)
  - ID: C_m2 (dimensions=3,type=float, original_id=C_m2)
  - ID: D_f0 (dimensions=2,type=float, original_id=D_f0)
  - ID: D_f1 (dimensions=3,type=float, original_id=D_f1)
  - ID: s_f0 (dimensions=2,1,type=float, original_id=s_f0)
  - ID: s_f1 (dimensions=3,1,type=float, original_id=s_f1)
  - ID: s_prime_f0 (dimensions=2,1,type=float, original_id=s_prime_f0)
  - ID: s_prime_f1 (dimensions=3,1,type=float, original_id=s_prime_f1)
  - ID: o_m0 (dimensions=3,1,type=float, original_id=o_m0)
  - ID: o_m1 (dimensions=3,1,type=float, original_id=o_m1)
  - ID: o_m2 (dimensions=3,1,type=float, original_id=o_m2)
  - ID: u_f1 (dimensions=1,type=int, original_id=u_f1)
  - ID: G (dimensions=1,type=float, original_id=G)
  - ID: t (dimensions=1,type=int, original_id=t)

Initial Parameters (0):

General Parameters (0):

Observations (0):

Transitions (5):
  - None -> None
  - None -> None
  - None -> None
  - None -> None
  - None -> None

Ontology Annotations (20):
  A_m0 = LikelihoodMatrixModality0
  A_m1 = LikelihoodMatrixModality1
  A_m2 = LikelihoodMatrixModality2
  B_f0 = TransitionMatrixFactor0
  B_f1 = TransitionMatrixFactor1
  C_m0 = LogPreferenceVectorModality0
  C_m1 = LogPreferenceVectorModality1
  C_m2 = LogPreferenceVectorModality2
  D_f0 = PriorOverHiddenStatesFactor0
  D_f1 = PriorOverHiddenStatesFactor1
  s_f0 = HiddenStateFactor0
  s_f1 = HiddenStateFactor1
  s_prime_f0 = NextHiddenStateFactor0
  s_prime_f1 = NextHiddenStateFactor1
  o_m0 = ObservationModality0
  o_m1 = ObservationModality1
  o_m2 = ObservationModality2
  π_f1 = PolicyVectorFactor1 # Distribution over actions for factor 1
  u_f1 = ActionFactor1       # Chosen action for factor 1
  G = ExpectedFreeEnergy


Exports for rxinfer_multiagent_gnn

JSON Files

rxinfer_multiagent_gnn.json

{
  "file_path": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md",
  "name": "Multi-agent Trajectory Planning",
  "metadata": {
    "description": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles."
  },
  "states": [
    {
      "id": "dt",
      "dimensions": "1,type=float",
      "original_id": "dt"
    },
    {
      "id": "gamma",
      "dimensions": "1,type=float",
      "original_id": "gamma"
    },
    {
      "id": "nr_steps",
      "dimensions": "1,type=int",
      "original_id": "nr_steps"
    },
    {
      "id": "nr_iterations",
      "dimensions": "1,type=int",
      "original_id": "nr_iterations"
    },
    {
      "id": "nr_agents",
      "dimensions": "1,type=int",
      "original_id": "nr_agents"
    },
    {
      "id": "softmin_temperature",
      "dimensions": "1,type=float",
      "original_id": "softmin_temperature"
    },
    {
      "id": "intermediate_steps",
      "dimensions": "1,type=int",
      "original_id": "intermediate_steps"
    },
    {
      "id": "save_intermediates",
      "dimensions": "1,type=bool",
      "original_id": "save_intermediates"
    },
    {
      "id": "A",
      "dimensions": "4,4,type=float",
      "original_id": "A"
    },
    {
      "id": "B",
      "dimensions": "4,2,type=float",
      "original_id": "B"
    },
    {
      "id": "C",
      "dimensions": "2,4,type=float",
      "original_id": "C"
    },
    {
      "id": "initial_state_variance",
      "dimensions": "1,type=float",
      "original_id": "initial_state_variance"
    },
    {
      "id": "control_variance",
      "dimensions": "1,type=float",
      "original_id": "control_variance"
    },
    {
      "id": "goal_constraint_variance",
      "dimensions": "1,type=float",
      "original_id": "goal_constraint_variance"
    },
    {
      "id": "gamma_shape",
      "dimensions": "1,type=float",
      "original_id": "gamma_shape"
    },
    {
      "id": "gamma_scale_factor",
      "dimensions": "1,type=float",
      "original_id": "gamma_scale_factor"
    },
    {
      "id": "x_limits",
      "dimensions": "2,type=float",
      "original_id": "x_limits"
    },
    {
      "id": "y_limits",
      "dimensions": "2,type=float",
      "original_id": "y_limits"
    },
    {
      "id": "fps",
      "dimensions": "1,type=int",
      "original_id": "fps"
    },
    {
      "id": "heatmap_resolution",
      "dimensions": "1,type=int",
      "original_id": "heatmap_resolution"
    },
    {
      "id": "plot_width",
      "dimensions": "1,type=int",
      "original_id": "plot_width"
    },
    {
      "id": "plot_height",
      "dimensions": "1,type=int",
      "original_id": "plot_height"
    },
    {
      "id": "agent_alpha",
      "dimensions": "1,type=float",
      "original_id": "agent_alpha"
    },
    {
      "id": "target_alpha",
      "dimensions": "1,type=float",
      "original_id": "target_alpha"
    },
    {
      "id": "color_palette",
      "dimensions": "1,type=string",
      "original_id": "color_palette"
    },
    {
      "id": "door_obstacle_center_1",
      "dimensions": "2,type=float",
      "original_id": "door_obstacle_center_1"
    },
    {
      "id": "door_obstacle_size_1",
      "dimensions": "2,type=float",
      "original_id": "door_obstacle_size_1"
    },
    {
      "id": "door_obstacle_center_2",
      "dimensions": "2,type=float",
      "original_id": "door_obstacle_center_2"
    },
    {
      "id": "door_obstacle_size_2",
      "dimensions": "2,type=float",
      "original_id": "door_obstacle_size_2"
    },
    {
      "id": "wall_obstacle_center",
      "dimensions": "2,type=float",
      "original_id": "wall_obstacle_center"
    },
    {
      "id": "wall_obstacle_size",
      "dimensions": "2,type=float",
      "original_id": "wall_obstacle_size"
    },
    {
      "id": "combined_obstacle_center_1",
      "dimensions": "2,type=float",
      "original_id": "combined_obstacle_center_1"
    },
    {
      "id": "combined_obstacle_size_1",
      "dimensions": "2,type=float",
      "original_id": "combined_obstacle_size_1"
    },
    {
      "id": "combined_obstacle_center_2",
      "dimensions": "2,type=float",
      "original_id": "combined_obstacle_center_2"
    },
    {
      "id": "combined_obstacle_size_2",
      "dimensions": "2,type=float",
      "original_id": "combined_obstacle_size_2"
    },
    {
      "id": "combined_obstacle_center_3",
      "dimensions": "2,type=float",
      "original_id": "combined_obstacle_center_3"
    },
    {
      "id": "combined_obstacle_size_3",
      "dimensions": "2,type=float",
      "original_id": "combined_obstacle_size_3"
    },
    {
      "id": "agent1_id",
      "dimensions": "1,type=int",
      "original_id": "agent1_id"
    },
    {
      "id": "agent1_radius",
      "dimensions": "1,type=float",
      "original_id": "agent1_radius"
    },
    {
      "id": "agent1_initial_position",
      "dimensions": "2,type=float",
      "original_id": "agent1_initial_position"
    },
    {
      "id": "agent1_target_position",
      "dimensions": "2,type=float",
      "original_id": "agent1_target_position"
    },
    {
      "id": "agent2_id",
      "dimensions": "1,type=int",
      "original_id": "agent2_id"
    },
    {
      "id": "agent2_radius",
      "dimensions": "1,type=float",
      "original_id": "agent2_radius"
    },
    {
      "id": "agent2_initial_position",
      "dimensions": "2,type=float",
      "original_id": "agent2_initial_position"
    },
    {
      "id": "agent2_target_position",
      "dimensions": "2,type=float",
      "original_id": "agent2_target_position"
    },
    {
      "id": "agent3_id",
      "dimensions": "1,type=int",
      "original_id": "agent3_id"
    },
    {
      "id": "agent3_radius",
      "dimensions": "1,type=float",
      "original_id": "agent3_radius"
    },
    {
      "id": "agent3_initial_position",
      "dimensions": "2,type=float",
      "original_id": "agent3_initial_position"
    },
    {
      "id": "agent3_target_position",
      "dimensions": "2,type=float",
      "original_id": "agent3_target_position"
    },
    {
      "id": "agent4_id",
      "dimensions": "1,type=int",
      "original_id": "agent4_id"
    },
    {
      "id": "agent4_radius",
      "dimensions": "1,type=float",
      "original_id": "agent4_radius"
    },
    {
      "id": "agent4_initial_position",
      "dimensions": "2,type=float",
      "original_id": "agent4_initial_position"
    },
    {
      "id": "agent4_target_position",
      "dimensions": "2,type=float",
      "original_id": "agent4_target_position"
    },
    {
      "id": "experiment_seeds",
      "dimensions": "2,type=int",
      "original_id": "experiment_seeds"
    },
    {
      "id": "results_dir",
      "dimensions": "1,type=string",
      "original_id": "results_dir"
    },
    {
      "id": "animation_template",
      "dimensions": "1,type=string",
      "original_id": "animation_template"
    },
    {
      "id": "control_vis_filename",
      "dimensions": "1,type=string",
      "original_id": "control_vis_filename"
    },
    {
      "id": "obstacle_distance_filename",
      "dimensions": "1,type=string",
      "original_id": "obstacle_distance_filename"
    },
    {
      "id": "path_uncertainty_filename",
      "dimensions": "1,type=string",
      "original_id": "path_uncertainty_filename"
    },
    {
      "id": "convergence_filename",
      "dimensions": "1,type=string",
      "original_id": "convergence_filename"
    }
  ],
  "parameters": {},
  "initial_parameters": {},
  "observations": [],
  "transitions": [
    {
      "sources": [
        "dt"
      ],
      "operator": ">",
      "targets": [
        "A"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "A",
        "B",
        "C"
      ],
      "operator": ">",
      "targets": [
        "state_space_model"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "state_space_model",
        "initial_state_variance",
        "control_variance"
      ],
      "operator": ">",
      "targets": [
        "agent_trajectories"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "agent_trajectories",
        "goal_constraint_variance"
      ],
      "operator": ">",
      "targets": [
        "goal_directed_behavior"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "agent_trajectories",
        "gamma",
        "gamma_shape",
        "gamma_scale_factor"
      ],
      "operator": ">",
      "targets": [
        "obstacle_avoidance"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "agent_trajectories",
        "nr_agents"
      ],
      "operator": ">",
      "targets": [
        "collision_avoidance"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "goal_directed_behavior",
        "obstacle_avoidance",
        "collision_avoidance"
      ],
      "operator": ">",
      "targets": [
        "planning_system"
      ],
      "attributes": {}
    }
  ],
  "ontology_annotations": {
    "dt": "TimeStep",
    "gamma": "ConstraintParameter",
    "nr_steps": "TrajectoryLength",
    "nr_iterations": "InferenceIterations",
    "nr_agents": "NumberOfAgents",
    "softmin_temperature": "SoftminTemperature",
    "A": "StateTransitionMatrix",
    "B": "ControlInputMatrix",
    "C": "ObservationMatrix",
    "initial_state_variance": "InitialStateVariance",
    "control_variance": "ControlVariance",
    "goal_constraint_variance": "GoalConstraintVariance"
  },
  "equations_text": "",
  "time_info": {
    "ModelTimeHorizon": "nr_steps"
  },
  "footer_text": "",
  "signature": {
    "Creator": "AI Assistant for GNN",
    "Date": "2024-07-27",
    "Status": "Example for RxInfer.jl multi-agent trajectory planning"
  },
  "raw_sections": {
    "GNNSection": "RxInferMultiAgentTrajectoryPlanning",
    "GNNVersionAndFlags": "GNN v1",
    "ModelName": "Multi-agent Trajectory Planning",
    "ModelAnnotation": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles.",
    "StateSpaceBlock": "# Model parameters\ndt[1,type=float]               # Time step for the state space model\ngamma[1,type=float]            # Constraint parameter for the Halfspace node\nnr_steps[1,type=int]           # Number of time steps in the trajectory\nnr_iterations[1,type=int]      # Number of inference iterations\nnr_agents[1,type=int]          # Number of agents in the simulation\nsoftmin_temperature[1,type=float] # Temperature parameter for the softmin function\nintermediate_steps[1,type=int] # Intermediate results saving interval\nsave_intermediates[1,type=bool] # Whether to save intermediate results\n\n# State space matrices\nA[4,4,type=float]              # State transition matrix\nB[4,2,type=float]              # Control input matrix\nC[2,4,type=float]              # Observation matrix\n\n# Prior distributions\ninitial_state_variance[1,type=float]    # Prior on initial state\ncontrol_variance[1,type=float]          # Prior on control inputs\ngoal_constraint_variance[1,type=float]  # Goal constraints variance\ngamma_shape[1,type=float]               # Parameters for GammaShapeRate prior\ngamma_scale_factor[1,type=float]        # Parameters for GammaShapeRate prior\n\n# Visualization parameters\nx_limits[2,type=float]            # Plot boundaries (x-axis)\ny_limits[2,type=float]            # Plot boundaries (y-axis)\nfps[1,type=int]                   # Animation frames per second\nheatmap_resolution[1,type=int]    # Heatmap resolution\nplot_width[1,type=int]            # Plot width\nplot_height[1,type=int]           # Plot height\nagent_alpha[1,type=float]         # Visualization alpha for agents\ntarget_alpha[1,type=float]        # Visualization alpha for targets\ncolor_palette[1,type=string]      # Color palette for visualization\n\n# Environment definitions\ndoor_obstacle_center_1[2,type=float]    # Door environment, obstacle 1 center\ndoor_obstacle_size_1[2,type=float]      # Door environment, obstacle 1 size\ndoor_obstacle_center_2[2,type=float]    # Door environment, obstacle 2 center\ndoor_obstacle_size_2[2,type=float]      # Door environment, obstacle 2 size\n\nwall_obstacle_center[2,type=float]      # Wall environment, obstacle center\nwall_obstacle_size[2,type=float]        # Wall environment, obstacle size\n\ncombined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center\ncombined_obstacle_size_1[2,type=float]   # Combined environment, obstacle 1 size\ncombined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center\ncombined_obstacle_size_2[2,type=float]   # Combined environment, obstacle 2 size\ncombined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center\ncombined_obstacle_size_3[2,type=float]   # Combined environment, obstacle 3 size\n\n# Agent configurations\nagent1_id[1,type=int]                   # Agent 1 ID\nagent1_radius[1,type=float]             # Agent 1 radius\nagent1_initial_position[2,type=float]   # Agent 1 initial position\nagent1_target_position[2,type=float]    # Agent 1 target position\n\nagent2_id[1,type=int]                   # Agent 2 ID\nagent2_radius[1,type=float]             # Agent 2 radius\nagent2_initial_position[2,type=float]   # Agent 2 initial position\nagent2_target_position[2,type=float]    # Agent 2 target position\n\nagent3_id[1,type=int]                   # Agent 3 ID\nagent3_radius[1,type=float]             # Agent 3 radius\nagent3_initial_position[2,type=float]   # Agent 3 initial position\nagent3_target_position[2,type=float]    # Agent 3 target 
position\n\nagent4_id[1,type=int]                   # Agent 4 ID\nagent4_radius[1,type=float]             # Agent 4 radius\nagent4_initial_position[2,type=float]   # Agent 4 initial position\nagent4_target_position[2,type=float]    # Agent 4 target position\n\n# Experiment configurations\nexperiment_seeds[2,type=int]            # Random seeds for reproducibility\nresults_dir[1,type=string]              # Base directory for results\nanimation_template[1,type=string]       # Filename template for animations\ncontrol_vis_filename[1,type=string]     # Filename for control visualization\nobstacle_distance_filename[1,type=string] # Filename for obstacle distance plot\npath_uncertainty_filename[1,type=string]  # Filename for path uncertainty plot\nconvergence_filename[1,type=string]       # Filename for convergence plot",
    "Connections": "# Model parameters\ndt > A\n(A, B, C) > state_space_model\n\n# Agent trajectories\n(state_space_model, initial_state_variance, control_variance) > agent_trajectories\n\n# Goal constraints\n(agent_trajectories, goal_constraint_variance) > goal_directed_behavior\n\n# Obstacle avoidance\n(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance\n\n# Collision avoidance\n(agent_trajectories, nr_agents) > collision_avoidance\n\n# Complete planning system\n(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system",
    "InitialParameterization": "# Model parameters\ndt=1.0\ngamma=1.0\nnr_steps=40\nnr_iterations=350\nnr_agents=4\nsoftmin_temperature=10.0\nintermediate_steps=10\nsave_intermediates=false\n\n# State space matrices\n# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]\nA={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}\n\n# B = [0 0; dt 0; 0 0; 0 dt]\nB={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}\n\n# C = [1 0 0 0; 0 0 1 0]\nC={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}\n\n# Prior distributions\ninitial_state_variance=100.0\ncontrol_variance=0.1\ngoal_constraint_variance=0.00001\ngamma_shape=1.5\ngamma_scale_factor=0.5\n\n# Visualization parameters\nx_limits={(-20, 20)}\ny_limits={(-20, 20)}\nfps=15\nheatmap_resolution=100\nplot_width=800\nplot_height=400\nagent_alpha=1.0\ntarget_alpha=0.2\ncolor_palette=\"tab10\"\n\n# Environment definitions\ndoor_obstacle_center_1={(-40.0, 0.0)}\ndoor_obstacle_size_1={(70.0, 5.0)}\ndoor_obstacle_center_2={(40.0, 0.0)}\ndoor_obstacle_size_2={(70.0, 5.0)}\n\nwall_obstacle_center={(0.0, 0.0)}\nwall_obstacle_size={(10.0, 5.0)}\n\ncombined_obstacle_center_1={(-50.0, 0.0)}\ncombined_obstacle_size_1={(70.0, 2.0)}\ncombined_obstacle_center_2={(50.0, 0.0)}\ncombined_obstacle_size_2={(70.0, 2.0)}\ncombined_obstacle_center_3={(5.0, -1.0)}\ncombined_obstacle_size_3={(3.0, 10.0)}\n\n# Agent configurations\nagent1_id=1\nagent1_radius=2.5\nagent1_initial_position={(-4.0, 10.0)}\nagent1_target_position={(-10.0, -10.0)}\n\nagent2_id=2\nagent2_radius=1.5\nagent2_initial_position={(-10.0, 5.0)}\nagent2_target_position={(10.0, -15.0)}\n\nagent3_id=3\nagent3_radius=1.0\nagent3_initial_position={(-15.0, -10.0)}\nagent3_target_position={(10.0, 10.0)}\n\nagent4_id=4\nagent4_radius=2.5\nagent4_initial_position={(0.0, -10.0)}\nagent4_target_position={(-10.0, 15.0)}\n\n# Experiment configurations\nexperiment_seeds={(42, 123)}\nresults_dir=\"results\"\nanimation_template=\"{environment}_{seed}.gif\"\ncontrol_vis_filename=\"control_signals.gif\"\nobstacle_distance_filename=\"obstacle_distance.png\"\npath_uncertainty_filename=\"path_uncertainty.png\"\nconvergence_filename=\"convergence.png\"",
    "InitialParameterization_raw_content": "# Model parameters\ndt=1.0\ngamma=1.0\nnr_steps=40\nnr_iterations=350\nnr_agents=4\nsoftmin_temperature=10.0\nintermediate_steps=10\nsave_intermediates=false\n\n# State space matrices\n# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]\nA={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}\n\n# B = [0 0; dt 0; 0 0; 0 dt]\nB={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}\n\n# C = [1 0 0 0; 0 0 1 0]\nC={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}\n\n# Prior distributions\ninitial_state_variance=100.0\ncontrol_variance=0.1\ngoal_constraint_variance=0.00001\ngamma_shape=1.5\ngamma_scale_factor=0.5\n\n# Visualization parameters\nx_limits={(-20, 20)}\ny_limits={(-20, 20)}\nfps=15\nheatmap_resolution=100\nplot_width=800\nplot_height=400\nagent_alpha=1.0\ntarget_alpha=0.2\ncolor_palette=\"tab10\"\n\n# Environment definitions\ndoor_obstacle_center_1={(-40.0, 0.0)}\ndoor_obstacle_size_1={(70.0, 5.0)}\ndoor_obstacle_center_2={(40.0, 0.0)}\ndoor_obstacle_size_2={(70.0, 5.0)}\n\nwall_obstacle_center={(0.0, 0.0)}\nwall_obstacle_size={(10.0, 5.0)}\n\ncombined_obstacle_center_1={(-50.0, 0.0)}\ncombined_obstacle_size_1={(70.0, 2.0)}\ncombined_obstacle_center_2={(50.0, 0.0)}\ncombined_obstacle_size_2={(70.0, 2.0)}\ncombined_obstacle_center_3={(5.0, -1.0)}\ncombined_obstacle_size_3={(3.0, 10.0)}\n\n# Agent configurations\nagent1_id=1\nagent1_radius=2.5\nagent1_initial_position={(-4.0, 10.0)}\nagent1_target_position={(-10.0, -10.0)}\n\nagent2_id=2\nagent2_radius=1.5\nagent2_initial_position={(-10.0, 5.0)}\nagent2_target_position={(10.0, -15.0)}\n\nagent3_id=3\nagent3_radius=1.0\nagent3_initial_position={(-15.0, -10.0)}\nagent3_target_position={(10.0, 10.0)}\n\nagent4_id=4\nagent4_radius=2.5\nagent4_initial_position={(0.0, -10.0)}\nagent4_target_position={(-10.0, 15.0)}\n\n# Experiment configurations\nexperiment_seeds={(42, 123)}\nresults_dir=\"results\"\nanimation_template=\"{environment}_{seed}.gif\"\ncontrol_vis_filename=\"control_signals.gif\"\nobstacle_distance_filename=\"obstacle_distance.png\"\npath_uncertainty_filename=\"path_uncertainty.png\"\nconvergence_filename=\"convergence.png\"",
    "Equations": "# State space model:\n# x_{t+1} = A * x_t + B * u_t + w_t,  w_t ~ N(0, control_variance)\n# y_t = C * x_t + v_t,                v_t ~ N(0, observation_variance)\n#\n# Obstacle avoidance constraint:\n# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)\n# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle\n#\n# Goal constraint:\n# p(x_T | goal) ~ N(goal, goal_constraint_variance)\n# where x_T is the final position\n#\n# Collision avoidance constraint:\n# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)\n# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii",
    "Time": "Dynamic\nDiscreteTime\nModelTimeHorizon=nr_steps",
    "ActInfOntologyAnnotation": "dt=TimeStep\ngamma=ConstraintParameter\nnr_steps=TrajectoryLength\nnr_iterations=InferenceIterations\nnr_agents=NumberOfAgents\nsoftmin_temperature=SoftminTemperature\nA=StateTransitionMatrix\nB=ControlInputMatrix\nC=ObservationMatrix\ninitial_state_variance=InitialStateVariance\ncontrol_variance=ControlVariance\ngoal_constraint_variance=GoalConstraintVariance",
    "ModelParameters": "nr_agents=4\nnr_steps=40\nnr_iterations=350",
    "Footer": "Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl",
    "Signature": "Creator: AI Assistant for GNN\nDate: 2024-07-27\nStatus: Example for RxInfer.jl multi-agent trajectory planning"
  },
  "other_sections": {},
  "gnnsection": {},
  "gnnversionandflags": {},
  "equations": "# State space model:\n# x_{t+1} = A * x_t + B * u_t + w_t,  w_t ~ N(0, control_variance)\n# y_t = C * x_t + v_t,                v_t ~ N(0, observation_variance)\n#\n# Obstacle avoidance constraint:\n# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)\n# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle\n#\n# Goal constraint:\n# p(x_T | goal) ~ N(goal, goal_constraint_variance)\n# where x_T is the final position\n#\n# Collision avoidance constraint:\n# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)\n# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii",
  "ModelParameters": {},
  "footer": "Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl"
}
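
The InitialParameterization comments spell out the state space matrices (A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1], B = [0 0; dt 0; 0 0; 0 dt], C = [1 0 0 0; 0 0 1 0], with dt = 1.0). A NumPy sketch of that system follows; the state ordering (position, velocity per axis) is inferred from the matrix structure rather than stated in the file.

import numpy as np

dt = 1.0
A = np.array([[1, dt, 0, 0],
              [0, 1,  0, 0],
              [0, 0,  1, dt],
              [0, 0,  0, 1]], dtype=float)  # state transition
B = np.array([[0, 0],
              [dt, 0],
              [0, 0],
              [0, dt]], dtype=float)        # control input
C = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)   # observe positions only

x = np.array([-4.0, 0.0, 10.0, 0.0])  # agent1_initial_position, zero velocity (assumed)
u = np.zeros(2)
print(C @ (A @ x + B @ u))            # next observed position: [-4. 10.]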

Text/Log Files

rxinfer_multiagent_gnn.txt

GNN Model Summary: Multi-agent Trajectory Planning
Source File: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md

Metadata:
  description: This model represents a multi-agent trajectory planning scenario in RxInfer.jl.
It includes:
- State space model for agents moving in a 2D environment
- Obstacle avoidance constraints
- Goal-directed behavior
- Inter-agent collision avoidance
The model can be used to simulate trajectory planning in various environments with obstacles.

States (60):
  - ID: dt (dimensions=1,type=float, original_id=dt)
  - ID: gamma (dimensions=1,type=float, original_id=gamma)
  - ID: nr_steps (dimensions=1,type=int, original_id=nr_steps)
  - ID: nr_iterations (dimensions=1,type=int, original_id=nr_iterations)
  - ID: nr_agents (dimensions=1,type=int, original_id=nr_agents)
  - ID: softmin_temperature (dimensions=1,type=float, original_id=softmin_temperature)
  - ID: intermediate_steps (dimensions=1,type=int, original_id=intermediate_steps)
  - ID: save_intermediates (dimensions=1,type=bool, original_id=save_intermediates)
  - ID: A (dimensions=4,4,type=float, original_id=A)
  - ID: B (dimensions=4,2,type=float, original_id=B)
  - ID: C (dimensions=2,4,type=float, original_id=C)
  - ID: initial_state_variance (dimensions=1,type=float, original_id=initial_state_variance)
  - ID: control_variance (dimensions=1,type=float, original_id=control_variance)
  - ID: goal_constraint_variance (dimensions=1,type=float, original_id=goal_constraint_variance)
  - ID: gamma_shape (dimensions=1,type=float, original_id=gamma_shape)
  - ID: gamma_scale_factor (dimensions=1,type=float, original_id=gamma_scale_factor)
  - ID: x_limits (dimensions=2,type=float, original_id=x_limits)
  - ID: y_limits (dimensions=2,type=float, original_id=y_limits)
  - ID: fps (dimensions=1,type=int, original_id=fps)
  - ID: heatmap_resolution (dimensions=1,type=int, original_id=heatmap_resolution)
  - ID: plot_width (dimensions=1,type=int, original_id=plot_width)
  - ID: plot_height (dimensions=1,type=int, original_id=plot_height)
  - ID: agent_alpha (dimensions=1,type=float, original_id=agent_alpha)
  - ID: target_alpha (dimensions=1,type=float, original_id=target_alpha)
  - ID: color_palette (dimensions=1,type=string, original_id=color_palette)
  - ID: door_obstacle_center_1 (dimensions=2,type=float, original_id=door_obstacle_center_1)
  - ID: door_obstacle_size_1 (dimensions=2,type=float, original_id=door_obstacle_size_1)
  - ID: door_obstacle_center_2 (dimensions=2,type=float, original_id=door_obstacle_center_2)
  - ID: door_obstacle_size_2 (dimensions=2,type=float, original_id=door_obstacle_size_2)
  - ID: wall_obstacle_center (dimensions=2,type=float, original_id=wall_obstacle_center)
  - ID: wall_obstacle_size (dimensions=2,type=float, original_id=wall_obstacle_size)
  - ID: combined_obstacle_center_1 (dimensions=2,type=float, original_id=combined_obstacle_center_1)
  - ID: combined_obstacle_size_1 (dimensions=2,type=float, original_id=combined_obstacle_size_1)
  - ID: combined_obstacle_center_2 (dimensions=2,type=float, original_id=combined_obstacle_center_2)
  - ID: combined_obstacle_size_2 (dimensions=2,type=float, original_id=combined_obstacle_size_2)
  - ID: combined_obstacle_center_3 (dimensions=2,type=float, original_id=combined_obstacle_center_3)
  - ID: combined_obstacle_size_3 (dimensions=2,type=float, original_id=combined_obstacle_size_3)
  - ID: agent1_id (dimensions=1,type=int, original_id=agent1_id)
  - ID: agent1_radius (dimensions=1,type=float, original_id=agent1_radius)
  - ID: agent1_initial_position (dimensions=2,type=float, original_id=agent1_initial_position)
  - ID: agent1_target_position (dimensions=2,type=float, original_id=agent1_target_position)
  - ID: agent2_id (dimensions=1,type=int, original_id=agent2_id)
  - ID: agent2_radius (dimensions=1,type=float, original_id=agent2_radius)
  - ID: agent2_initial_position (dimensions=2,type=float, original_id=agent2_initial_position)
  - ID: agent2_target_position (dimensions=2,type=float, original_id=agent2_target_position)
  - ID: agent3_id (dimensions=1,type=int, original_id=agent3_id)
  - ID: agent3_radius (dimensions=1,type=float, original_id=agent3_radius)
  - ID: agent3_initial_position (dimensions=2,type=float, original_id=agent3_initial_position)
  - ID: agent3_target_position (dimensions=2,type=float, original_id=agent3_target_position)
  - ID: agent4_id (dimensions=1,type=int, original_id=agent4_id)
  - ID: agent4_radius (dimensions=1,type=float, original_id=agent4_radius)
  - ID: agent4_initial_position (dimensions=2,type=float, original_id=agent4_initial_position)
  - ID: agent4_target_position (dimensions=2,type=float, original_id=agent4_target_position)
  - ID: experiment_seeds (dimensions=2,type=int, original_id=experiment_seeds)
  - ID: results_dir (dimensions=1,type=string, original_id=results_dir)
  - ID: animation_template (dimensions=1,type=string, original_id=animation_template)
  - ID: control_vis_filename (dimensions=1,type=string, original_id=control_vis_filename)
  - ID: obstacle_distance_filename (dimensions=1,type=string, original_id=obstacle_distance_filename)
  - ID: path_uncertainty_filename (dimensions=1,type=string, original_id=path_uncertainty_filename)
  - ID: convergence_filename (dimensions=1,type=string, original_id=convergence_filename)

Initial Parameters (0):

General Parameters (0):

Observations (0):

Transitions (7):
  - None -> None
  - None -> None
  - None -> None
  - None -> None
  - None -> None
  - None -> None
  - None -> None

Ontology Annotations (12):
  dt = TimeStep
  gamma = ConstraintParameter
  nr_steps = TrajectoryLength
  nr_iterations = InferenceIterations
  nr_agents = NumberOfAgents
  softmin_temperature = SoftminTemperature
  A = StateTransitionMatrix
  B = ControlInputMatrix
  C = ObservationMatrix
  initial_state_variance = InitialStateVariance

... (file truncated, total lines: 103)

GNN Processing Summary (Overall File List)

📊 GNN Processing Summary

🗓️ Generated: 2025-06-06 13:10:58

📁 GNN Files Discovered

Found 2 GNN files for processing:

  • src/gnn/examples/pymdp_pomdp_agent.md
  • src/gnn/examples/rxinfer_multiagent_gnn.md

🔄 Pipeline Execution Status

Pipeline execution data not available.


Report generated by GNN Processing Pipeline Step 5 (Export)

GNN Visualizations (Step 6)

Visualizations for pymdp_pomdp_agent

Markdown Reports

file_content.md

GNN File: src/gnn/examples/pymdp_pomdp_agent.md

Raw File Content

GNN Example: Multifactor PyMDP Agent

Format: Markdown representation of a Multifactor PyMDP model in Active Inference format

Version: 1.0

This file is machine-readable and attempts to represent a PyMDP agent with multiple observation modalities and hidden state factors.

GNNSection

MultifactorPyMDPAgent

GNNVersionAndFlags

GNN v1

ModelName

Multifactor PyMDP Agent v1

ModelAnnotation

This model represents a PyMDP agent with multiple observation modalities and hidden state factors.
- Observation modalities: "state_observation" (3 outcomes), "reward" (3 outcomes), "decision_proprioceptive" (3 outcomes)
- Hidden state factors: "reward_level" (2 states), "decision_state" (3 states)
- Control: "decision_state" factor is controllable with 3 possible actions.
The parameterization is derived from a PyMDP Python script example.

StateSpaceBlock

A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]

A_m0[3,2,3,type=float]   # Likelihood for modality 0 ("state_observation")
A_m1[3,2,3,type=float]   # Likelihood for modality 1 ("reward")
A_m2[3,2,3,type=float]   # Likelihood for modality 2 ("decision_proprioceptive")

B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]

B_f0[2,2,1,type=float] # Transitions for factor 0 ("reward_level"), 1 implicit action (uncontrolled) B_f1[3,3,3,type=float] # Transitions for factor 1 ("decision_state"), 3 actions

C_vectors are defined per modality: C_m[observation_outcomes]

C_m0[3,type=float] # Preferences for modality 0 C_m1[3,type=float] # Preferences for modality 1 C_m2[3,type=float] # Preferences for modality 2

D_vectors are defined per hidden state factor: D_f[states]

D_f0[2,type=float] # Prior for factor 0 D_f1[3,type=float] # Prior for factor 1

Hidden States

s_f0[2,1,type=float] # Hidden state for factor 0 ("reward_level") s_f1[3,1,type=float] # Hidden state for factor 1 ("decision_state") s_prime_f0[2,1,type=float] # Next hidden state for factor 0 s_prime_f1[3,1,type=float] # Next hidden state for factor 1

Observations

o_m0[3,1,type=float] # Observation for modality 0 o_m1[3,1,type=float] # Observation for modality 1 o_m2[3,1,type=float] # Observation for modality 2

Policy and Control

π_f1[3,type=float] # Policy (distribution over actions) for controllable factor 1 u_f1[1,type=int] # Action taken for controllable factor 1 G[1,type=float] # Expected Free Energy (overall, or can be per policy) t[1,type=int] # Time step

Connections

(D_f0,D_f1)-(s_f0,s_f1) (s_f0,s_f1)-(A_m0,A_m1,A_m2) (A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2) (s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled (B_f0,B_f1)-(s_prime_f0,s_prime_f1) (C_m0,C_m1,C_m2)>G G>π_f1 π_f1-u_f1 G=ExpectedFreeEnergy t=Time

InitialParameterization

# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]
# A[0][:, :, 0] = np.ones((3,2))/3
# A[0][:, :, 1] = np.ones((3,2))/3
# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)
A_m0={
  ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ),  # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)
  ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ),  # obs=1
  ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) )   # obs=2
}

# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3
# A[1][2, :, 0] = [1.0,1.0]
# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]
# A[1][2, :, 2] = [1.0,1.0]
# Others are 0.
A_m1={
  ( (0.0,0.731,0.0), (0.0,0.269,0.0) ),  # obs=0
  ( (0.0,0.269,0.0), (0.0,0.731,0.0) ),  # obs=1
  ( (1.0,0.0,1.0), (1.0,0.0,1.0) )       # obs=2
}

# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3
# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0
# Others are 0.
A_m2={
  ( (1.0,0.0,0.0), (1.0,0.0,0.0) ),  # obs=0
  ( (0.0,1.0,0.0), (0.0,1.0,0.0) ),  # obs=1
  ( (0.0,0.0,1.0), (0.0,0.0,1.0) )   # obs=2
}

# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]
# B_f0 = eye(2)
B_f0={
  ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)
  ( (0.0),(1.0) )  # s_next=1
}

# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]
# B_f1[:,:,action_idx] = eye(3) for each action
B_f1={
  ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...
  ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1
  ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) )  # s_next=2
}

# C_m0: num_obs[0]=3. Defaults to zeros.
C_m0={(0.0,0.0,0.0)}

# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0
C_m1={(1.0,-2.0,0.0)}

# C_m2: num_obs[2]=3. Defaults to zeros.
C_m2={(0.0,0.0,0.0)}

# D_f0: factor 0 (2 states). Uniform prior.
D_f0={(0.5,0.5)}

# D_f1: factor 1 (3 states). Uniform prior.
D_f1={(0.33333,0.33333,0.33333)}

Equations

# Standard PyMDP agent equations for state inference (infer_states),
# policy inference (infer_policies), and action sampling (sample_action).
# qs = infer_states(o)
# q_pi, efe = infer_policies()
# action = sample_action()

Time

Dynamic
DiscreteTime=t
ModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.

ActInfOntologyAnnotation

A_m0=LikelihoodMatrixModality0
A_m1=LikelihoodMatrixModality1
A_m2=LikelihoodMatrixModality2
B_f0=TransitionMatrixFactor0
B_f1=TransitionMatrixFactor1
C_m0=LogPreferenceVectorModality0
C_m1=LogPreferenceVectorModality1
C_m2=LogPreferenceVectorModality2
D_f0=PriorOverHiddenStatesFactor0
D_f1=PriorOverHiddenStatesFactor1
s_f0=HiddenStateFactor0
s_f1=HiddenStateFactor1
s_prime_f0=NextHiddenStateFactor0
s_prime_f1=NextHiddenStateFactor1
o_m0=ObservationModality0
o_m1=ObservationModality1
o_m2=ObservationModality2
π_f1=PolicyVectorFactor1 # Distribution over actions for factor 1
u_f1=ActionFactor1       # Chosen action for factor 1
G=ExpectedFreeEnergy

ModelParameters

num_hidden_states_factors: [2, 3]  # s_f0[2], s_f1[3]
num_obs_modalities: [3, 3, 3]      # o_m0[3], o_m1[3], o_m2[3]
num_control_factors: [1, 3]        # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)

Footer

Multifactor PyMDP Agent v1 - GNN Representation

Signature

NA

Parsed Sections

_HeaderComments

# GNN Example: Multifactor PyMDP Agent
# Format: Markdown representation of a Multifactor PyMDP model in Active Inference format
# Version: 1.0
# This file is machine-readable and attempts to represent a PyMDP agent with multiple observation modalities and hidden state factors.

ModelName

Multifactor PyMDP Agent v1

GNNSection

MultifactorPyMDPAgent

GNNVersionAndFlags

GNN v1

ModelAnnotation

This model represents a PyMDP agent with multiple observation modalities and hidden state factors.
- Observation modalities: "state_observation" (3 outcomes), "reward" (3 outcomes), "decision_proprioceptive" (3 outcomes)
- Hidden state factors: "reward_level" (2 states), "decision_state" (3 states)
- Control: "decision_state" factor is controllable with 3 possible actions.
The parameterization is derived from a PyMDP Python script example.

StateSpaceBlock

# A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]
A_m0[3,2,3,type=float]   # Likelihood for modality 0 ("state_observation")
A_m1[3,2,3,type=float]   # Likelihood for modality 1 ("reward")
A_m2[3,2,3,type=float]   # Likelihood for modality 2 ("decision_proprioceptive")

# B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]
B_f0[2,2,1,type=float]   # Transitions for factor 0 ("reward_level"), 1 implicit action (uncontrolled)
B_f1[3,3,3,type=float]   # Transitions for factor 1 ("decision_state"), 3 actions

# C_vectors are defined per modality: C_m[observation_outcomes]
C_m0[3,type=float]       # Preferences for modality 0
C_m1[3,type=float]       # Preferences for modality 1
C_m2[3,type=float]       # Preferences for modality 2

# D_vectors are defined per hidden state factor: D_f[states]
D_f0[2,type=float]       # Prior for factor 0
D_f1[3,type=float]       # Prior for factor 1

# Hidden States
s_f0[2,1,type=float]     # Hidden state for factor 0 ("reward_level")
s_f1[3,1,type=float]     # Hidden state for factor 1 ("decision_state")
s_prime_f0[2,1,type=float] # Next hidden state for factor 0
s_prime_f1[3,1,type=float] # Next hidden state for factor 1

# Observations
o_m0[3,1,type=float]     # Observation for modality 0
o_m1[3,1,type=float]     # Observation for modality 1
o_m2[3,1,type=float]     # Observation for modality 2

# Policy and Control
π_f1[3,type=float]       # Policy (distribution over actions) for controllable factor 1
u_f1[1,type=int]         # Action taken for controllable factor 1
G[1,type=float]          # Expected Free Energy (overall, or can be per policy)
t[1,type=int]            # Time step

Connections

(D_f0,D_f1)-(s_f0,s_f1)
(s_f0,s_f1)-(A_m0,A_m1,A_m2)
(A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2)
(s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled
(B_f0,B_f1)-(s_prime_f0,s_prime_f1)
(C_m0,C_m1,C_m2)>G
G>π_f1
π_f1-u_f1
G=ExpectedFreeEnergy
t=Time
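
In GNN connection syntax, '-' marks an undirected association, '>' a directed influence, and '=' appears to bind a variable to a concept (e.g. G=ExpectedFreeEnergy). A small illustrative sketch of classifying these statements; the helper name is hypothetical, not the pipeline's actual parser:

```python
# Hypothetical classifier for GNN connection lines: '>' denotes a directed
# edge, '=' a concept binding, and '-' an undirected association.
def classify_connection(line: str) -> str:
    stmt = line.split("#")[0].strip()   # drop trailing comments
    if ">" in stmt:
        return "directed"
    if "=" in stmt:
        return "binding"
    return "undirected"

lines = ["(C_m0,C_m1,C_m2)>G", "π_f1-u_f1", "G=ExpectedFreeEnergy"]
print([classify_connection(l) for l in lines])
# ['directed', 'undirected' is not printed here: result is ['directed', 'undirected', 'binding']
```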

InitialParameterization

# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]
# A[0][:, :, 0] = np.ones((3,2))/3
# A[0][:, :, 1] = np.ones((3,2))/3
# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)
A_m0={
  ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ),  # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)
  ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ),  # obs=1
  ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) )   # obs=2
}

# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3
# A[1][2, :, 0] = [1.0,1.0]
# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]
# A[1][2, :, 2] = [1.0,1.0]
# Others are 0.
A_m1={
  ( (0.0,0.731,0.0), (0.0,0.269,0.0) ),  # obs=0
  ( (0.0,0.269,0.0), (0.0,0.731,0.0) ),  # obs=1
  ( (1.0,0.0,1.0), (1.0,0.0,1.0) )      # obs=2
}

# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3
# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0
# Others are 0.
A_m2={
  ( (1.0,0.0,0.0), (1.0,0.0,0.0) ),  # obs=0
  ( (0.0,1.0,0.0), (0.0,1.0,0.0) ),  # obs=1
  ( (0.0,0.0,1.0), (0.0,0.0,1.0) )   # obs=2
}

# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]
# B_f0 = eye(2)
B_f0={
  ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)
  ( (0.0),(1.0) )  # s_next=1
}

# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]
# B_f1[:,:,action_idx] = eye(3) for each action
B_f1={
  ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...
  ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1
  ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) )  # s_next=2
}

# C_m0: num_obs[0]=3. Defaults to zeros.
C_m0={(0.0,0.0,0.0)}

# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0
C_m1={(1.0,-2.0,0.0)}

# C_m2: num_obs[2]=3. Defaults to zeros.
C_m2={(0.0,0.0,0.0)}

# D_f0: factor 0 (2 states). Uniform prior.
D_f0={(0.5,0.5)}

# D_f1: factor 1 (3 states). Uniform prior.
D_f1={(0.33333,0.33333,0.33333)}
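
The `#` comments above give NumPy-style recipes for these arrays. A minimal sketch, assuming only numpy and scipy (variable names are illustrative), of how they could be materialized:

```python
# NumPy reconstruction of the commented recipes above.
import numpy as np
from scipy.special import softmax

A_m0 = np.zeros((3, 2, 3))                   # [obs, s_f0, s_f1]
A_m0[:, :, 0] = np.ones((3, 2)) / 3          # uniform when s_f1 = 0
A_m0[:, :, 1] = np.ones((3, 2)) / 3          # uniform when s_f1 = 1
A_m0[:, :, 2] = [[0.8, 0.2], [0.0, 0.0], [0.2, 0.8]]  # informative when s_f1 = 2

A_m1 = np.zeros((3, 2, 3))
A_m1[2, :, 0] = 1.0                          # "null" reward outcome when s_f1 = 0
A_m1[0:2, :, 1] = softmax(np.eye(2), axis=0) # approx [[0.731, 0.269], [0.269, 0.731]]
A_m1[2, :, 2] = 1.0                          # "null" reward outcome when s_f1 = 2

A_m2 = np.zeros((3, 2, 3))
A_m2[0, :, 0] = A_m2[1, :, 1] = A_m2[2, :, 2] = 1.0  # proprioception mirrors s_f1

B_f0 = np.eye(2)[:, :, None]                 # [s_next, s_prev, 1 action]
B_f1 = np.stack([np.eye(3)] * 3, axis=-1)    # identity transitions for each of 3 actions
```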

Equations

# Standard PyMDP agent equations for state inference (infer_states),
# policy inference (infer_policies), and action sampling (sample_action).
# qs = infer_states(o)
# q_pi, efe = infer_policies()
# action = sample_action()
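
A runnable sketch of this loop, assuming the pymdp package (`pip install inferactively-pymdp`) and its `utils` helpers; the arrays are stubbed with random normalized matrices rather than the exact values above:

```python
# Minimal pymdp agent loop matching the equations above (arrays are stubs).
from pymdp import utils
from pymdp.agent import Agent

num_obs      = [3, 3, 3]   # o_m0, o_m1, o_m2
num_states   = [2, 3]      # s_f0 ("reward_level"), s_f1 ("decision_state")
num_controls = [1, 3]      # B_f0 uncontrolled, B_f1 has 3 actions

A = utils.random_A_matrix(num_obs, num_states)
B = utils.random_B_matrix(num_states, num_controls)
C = utils.obj_array_zeros(num_obs)
C[1][:] = [1.0, -2.0, 0.0]           # C_m1 from InitialParameterization
D = utils.obj_array_uniform(num_states)

agent = Agent(A=A, B=B, C=C, D=D)

obs = [0, 2, 1]                      # one observation index per modality
qs = agent.infer_states(obs)         # qs = infer_states(o)
q_pi, efe = agent.infer_policies()   # q_pi, efe = infer_policies()
action = agent.sample_action()       # action = sample_action()
```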

Time

Dynamic
DiscreteTime=t
ModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.

ActInfOntologyAnnotation

A_m0=LikelihoodMatrixModality0
A_m1=LikelihoodMatrixModality1
A_m2=LikelihoodMatrixModality2
B_f0=TransitionMatrixFactor0
B_f1=TransitionMatrixFactor1
C_m0=LogPreferenceVectorModality0
C_m1=LogPreferenceVectorModality1
C_m2=LogPreferenceVectorModality2
D_f0=PriorOverHiddenStatesFactor0
D_f1=PriorOverHiddenStatesFactor1
s_f0=HiddenStateFactor0
s_f1=HiddenStateFactor1
s_prime_f0=NextHiddenStateFactor0
s_prime_f1=NextHiddenStateFactor1
o_m0=ObservationModality0
o_m1=ObservationModality1
o_m2=ObservationModality2
π_f1=PolicyVectorFactor1 # Distribution over actions for factor 1
u_f1=ActionFactor1       # Chosen action for factor 1
G=ExpectedFreeEnergy

ModelParameters

num_hidden_states_factors: [2, 3]  # s_f0[2], s_f1[3]
num_obs_modalities: [3, 3, 3]     # o_m0[3], o_m1[3], o_m2[3]
num_control_factors: [1, 3]   # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)

Footer

Multifactor PyMDP Agent v1 - GNN Representation

Signature

NA

JSON Files

full_model_data.json

{
  "_HeaderComments": "# GNN Example: Multifactor PyMDP Agent\n# Format: Markdown representation of a Multifactor PyMDP model in Active Inference format\n# Version: 1.0\n# This file is machine-readable and attempts to represent a PyMDP agent with multiple observation modalities and hidden state factors.",
  "ModelName": "Multifactor PyMDP Agent v1",
  "GNNSection": "MultifactorPyMDPAgent",
  "GNNVersionAndFlags": "GNN v1",
  "ModelAnnotation": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example.",
  "StateSpaceBlock": "# A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]\nA_m0[3,2,3,type=float]   # Likelihood for modality 0 (\"state_observation\")\nA_m1[3,2,3,type=float]   # Likelihood for modality 1 (\"reward\")\nA_m2[3,2,3,type=float]   # Likelihood for modality 2 (\"decision_proprioceptive\")\n\n# B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]\nB_f0[2,2,1,type=float]   # Transitions for factor 0 (\"reward_level\"), 1 implicit action (uncontrolled)\nB_f1[3,3,3,type=float]   # Transitions for factor 1 (\"decision_state\"), 3 actions\n\n# C_vectors are defined per modality: C_m[observation_outcomes]\nC_m0[3,type=float]       # Preferences for modality 0\nC_m1[3,type=float]       # Preferences for modality 1\nC_m2[3,type=float]       # Preferences for modality 2\n\n# D_vectors are defined per hidden state factor: D_f[states]\nD_f0[2,type=float]       # Prior for factor 0\nD_f1[3,type=float]       # Prior for factor 1\n\n# Hidden States\ns_f0[2,1,type=float]     # Hidden state for factor 0 (\"reward_level\")\ns_f1[3,1,type=float]     # Hidden state for factor 1 (\"decision_state\")\ns_prime_f0[2,1,type=float] # Next hidden state for factor 0\ns_prime_f1[3,1,type=float] # Next hidden state for factor 1\n\n# Observations\no_m0[3,1,type=float]     # Observation for modality 0\no_m1[3,1,type=float]     # Observation for modality 1\no_m2[3,1,type=float]     # Observation for modality 2\n\n# Policy and Control\n\u03c0_f1[3,type=float]       # Policy (distribution over actions) for controllable factor 1\nu_f1[1,type=int]         # Action taken for controllable factor 1\nG[1,type=float]          # Expected Free Energy (overall, or can be per policy)\nt[1,type=int]            # Time step",
  "Connections": "(D_f0,D_f1)-(s_f0,s_f1)\n(s_f0,s_f1)-(A_m0,A_m1,A_m2)\n(A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2)\n(s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled\n(B_f0,B_f1)-(s_prime_f0,s_prime_f1)\n(C_m0,C_m1,C_m2)>G\nG>\u03c0_f1\n\u03c0_f1-u_f1\nG=ExpectedFreeEnergy\nt=Time",
  "InitialParameterization": "# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]\n# A[0][:, :, 0] = np.ones((3,2))/3\n# A[0][:, :, 1] = np.ones((3,2))/3\n# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)\nA_m0={\n  ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ),  # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)\n  ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ),  # obs=1\n  ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) )   # obs=2\n}\n\n# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3\n# A[1][2, :, 0] = [1.0,1.0]\n# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]\n# A[1][2, :, 2] = [1.0,1.0]\n# Others are 0.\nA_m1={\n  ( (0.0,0.731,0.0), (0.0,0.269,0.0) ),  # obs=0\n  ( (0.0,0.269,0.0), (0.0,0.731,0.0) ),  # obs=1\n  ( (1.0,0.0,1.0), (1.0,0.0,1.0) )      # obs=2\n}\n\n# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3\n# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0\n# Others are 0.\nA_m2={\n  ( (1.0,0.0,0.0), (1.0,0.0,0.0) ),  # obs=0\n  ( (0.0,1.0,0.0), (0.0,1.0,0.0) ),  # obs=1\n  ( (0.0,0.0,1.0), (0.0,0.0,1.0) )   # obs=2\n}\n\n# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]\n# B_f0 = eye(2)\nB_f0={\n  ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)\n  ( (0.0),(1.0) )  # s_next=1\n}\n\n# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]\n# B_f1[:,:,action_idx] = eye(3) for each action\nB_f1={\n  ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...\n  ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1\n  ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) )  # s_next=2\n}\n\n# C_m0: num_obs[0]=3. Defaults to zeros.\nC_m0={(0.0,0.0,0.0)}\n\n# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0\nC_m1={(1.0,-2.0,0.0)}\n\n# C_m2: num_obs[2]=3. Defaults to zeros.\nC_m2={(0.0,0.0,0.0)}\n\n# D_f0: factor 0 (2 states). Uniform prior.\nD_f0={(0.5,0.5)}\n\n# D_f1: factor 1 (3 states). Uniform prior.\nD_f1={(0.33333,0.33333,0.33333)}",
  "Equations": "# Standard PyMDP agent equations for state inference (infer_states),\n# policy inference (infer_policies), and action sampling (sample_action).\n# qs = infer_states(o)\n# q_pi, efe = infer_policies()\n# action = sample_action()",
  "Time": "Dynamic\nDiscreteTime=t\nModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.",
  "ActInfOntologyAnnotation": "A_m0=LikelihoodMatrixModality0\nA_m1=LikelihoodMatrixModality1\nA_m2=LikelihoodMatrixModality2\nB_f0=TransitionMatrixFactor0\nB_f1=TransitionMatrixFactor1\nC_m0=LogPreferenceVectorModality0\nC_m1=LogPreferenceVectorModality1\nC_m2=LogPreferenceVectorModality2\nD_f0=PriorOverHiddenStatesFactor0\nD_f1=PriorOverHiddenStatesFactor1\ns_f0=HiddenStateFactor0\ns_f1=HiddenStateFactor1\ns_prime_f0=NextHiddenStateFactor0\ns_prime_f1=NextHiddenStateFactor1\no_m0=ObservationModality0\no_m1=ObservationModality1\no_m2=ObservationModality2\n\u03c0_f1=PolicyVectorFactor1 # Distribution over actions for factor 1\nu_f1=ActionFactor1       # Chosen action for factor 1\nG=ExpectedFreeEnergy",
  "ModelParameters": "num_hidden_states_factors: [2, 3]  # s_f0[2], s_f1[3]\nnum_obs_modalities: [3, 3, 3]     # o_m0[3], o_m1[3], o_m2[3]\nnum_control_factors: [1, 3]   # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)",
  "Footer": "Multifactor PyMDP Agent v1 - GNN Representation",
  "Signature": "NA"
}

model_metadata.json

{
  "ModelName": "Multifactor PyMDP Agent v1",
  "ModelAnnotation": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example.",
  "GNNVersionAndFlags": "GNN v1",
  "Time": "Dynamic\nDiscreteTime=t\nModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.",
  "ActInfOntologyAnnotation": "A_m0=LikelihoodMatrixModality0\nA_m1=LikelihoodMatrixModality1\nA_m2=LikelihoodMatrixModality2\nB_f0=TransitionMatrixFactor0\nB_f1=TransitionMatrixFactor1\nC_m0=LogPreferenceVectorModality0\nC_m1=LogPreferenceVectorModality1\nC_m2=LogPreferenceVectorModality2\nD_f0=PriorOverHiddenStatesFactor0\nD_f1=PriorOverHiddenStatesFactor1\ns_f0=HiddenStateFactor0\ns_f1=HiddenStateFactor1\ns_prime_f0=NextHiddenStateFactor0\ns_prime_f1=NextHiddenStateFactor1\no_m0=ObservationModality0\no_m1=ObservationModality1\no_m2=ObservationModality2\n\u03c0_f1=PolicyVectorFactor1 # Distribution over actions for factor 1\nu_f1=ActionFactor1       # Chosen action for factor 1\nG=ExpectedFreeEnergy"
}

Visualizations for rxinfer_multiagent_gnn: rxinfer_multiagent_gnn

Images

Markdown Reports

file_content.md

GNN File: src/gnn/examples/rxinfer_multiagent_gnn.md

Raw File Content

# GNN Example: RxInfer Multi-agent Trajectory Planning
# Format: Markdown representation of a Multi-agent Trajectory Planning model for RxInfer.jl
# Version: 1.0
# This file is machine-readable and represents the configuration for the RxInfer.jl multi-agent trajectory planning example.

GNNSection

RxInferMultiAgentTrajectoryPlanning

GNNVersionAndFlags

GNN v1

ModelName

Multi-agent Trajectory Planning

ModelAnnotation

This model represents a multi-agent trajectory planning scenario in RxInfer.jl.
It includes:
- State space model for agents moving in a 2D environment
- Obstacle avoidance constraints
- Goal-directed behavior
- Inter-agent collision avoidance
The model can be used to simulate trajectory planning in various environments with obstacles.

StateSpaceBlock

# Model parameters
dt[1,type=float]               # Time step for the state space model
gamma[1,type=float]            # Constraint parameter for the Halfspace node
nr_steps[1,type=int]           # Number of time steps in the trajectory
nr_iterations[1,type=int]      # Number of inference iterations
nr_agents[1,type=int]          # Number of agents in the simulation
softmin_temperature[1,type=float] # Temperature parameter for the softmin function
intermediate_steps[1,type=int] # Intermediate results saving interval
save_intermediates[1,type=bool] # Whether to save intermediate results

# State space matrices
A[4,4,type=float]              # State transition matrix
B[4,2,type=float]              # Control input matrix
C[2,4,type=float]              # Observation matrix

# Prior distributions
initial_state_variance[1,type=float]    # Prior on initial state
control_variance[1,type=float]          # Prior on control inputs
goal_constraint_variance[1,type=float]  # Goal constraints variance
gamma_shape[1,type=float]               # Parameters for GammaShapeRate prior
gamma_scale_factor[1,type=float]        # Parameters for GammaShapeRate prior

# Visualization parameters
x_limits[2,type=float]            # Plot boundaries (x-axis)
y_limits[2,type=float]            # Plot boundaries (y-axis)
fps[1,type=int]                   # Animation frames per second
heatmap_resolution[1,type=int]    # Heatmap resolution
plot_width[1,type=int]            # Plot width
plot_height[1,type=int]           # Plot height
agent_alpha[1,type=float]         # Visualization alpha for agents
target_alpha[1,type=float]        # Visualization alpha for targets
color_palette[1,type=string]      # Color palette for visualization

# Environment definitions
door_obstacle_center_1[2,type=float]    # Door environment, obstacle 1 center
door_obstacle_size_1[2,type=float]      # Door environment, obstacle 1 size
door_obstacle_center_2[2,type=float]    # Door environment, obstacle 2 center
door_obstacle_size_2[2,type=float]      # Door environment, obstacle 2 size

wall_obstacle_center[2,type=float]      # Wall environment, obstacle center
wall_obstacle_size[2,type=float]        # Wall environment, obstacle size

combined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center
combined_obstacle_size_1[2,type=float]   # Combined environment, obstacle 1 size
combined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center
combined_obstacle_size_2[2,type=float]   # Combined environment, obstacle 2 size
combined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center
combined_obstacle_size_3[2,type=float]   # Combined environment, obstacle 3 size

# Agent configurations
agent1_id[1,type=int]                   # Agent 1 ID
agent1_radius[1,type=float]             # Agent 1 radius
agent1_initial_position[2,type=float]   # Agent 1 initial position
agent1_target_position[2,type=float]    # Agent 1 target position

agent2_id[1,type=int]                   # Agent 2 ID
agent2_radius[1,type=float]             # Agent 2 radius
agent2_initial_position[2,type=float]   # Agent 2 initial position
agent2_target_position[2,type=float]    # Agent 2 target position

agent3_id[1,type=int]                   # Agent 3 ID
agent3_radius[1,type=float]             # Agent 3 radius
agent3_initial_position[2,type=float]   # Agent 3 initial position
agent3_target_position[2,type=float]    # Agent 3 target position

agent4_id[1,type=int]                   # Agent 4 ID
agent4_radius[1,type=float]             # Agent 4 radius
agent4_initial_position[2,type=float]   # Agent 4 initial position
agent4_target_position[2,type=float]    # Agent 4 target position

# Experiment configurations
experiment_seeds[2,type=int]            # Random seeds for reproducibility
results_dir[1,type=string]              # Base directory for results
animation_template[1,type=string]       # Filename template for animations
control_vis_filename[1,type=string]     # Filename for control visualization
obstacle_distance_filename[1,type=string] # Filename for obstacle distance plot
path_uncertainty_filename[1,type=string]  # Filename for path uncertainty plot
convergence_filename[1,type=string]       # Filename for convergence plot

Connections

# Model parameters
dt > A
(A, B, C) > state_space_model

# Agent trajectories
(state_space_model, initial_state_variance, control_variance) > agent_trajectories

# Goal constraints
(agent_trajectories, goal_constraint_variance) > goal_directed_behavior

# Obstacle avoidance
(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance

# Collision avoidance
(agent_trajectories, nr_agents) > collision_avoidance

# Complete planning system
(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system

InitialParameterization

# Model parameters
dt=1.0
gamma=1.0
nr_steps=40
nr_iterations=350
nr_agents=4
softmin_temperature=10.0
intermediate_steps=10
save_intermediates=false

# State space matrices
# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]
A={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}

# B = [0 0; dt 0; 0 0; 0 dt]
B={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}

# C = [1 0 0 0; 0 0 1 0]
C={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}

# Prior distributions
initial_state_variance=100.0
control_variance=0.1
goal_constraint_variance=0.00001
gamma_shape=1.5
gamma_scale_factor=0.5

# Visualization parameters
x_limits={(-20, 20)}
y_limits={(-20, 20)}
fps=15
heatmap_resolution=100
plot_width=800
plot_height=400
agent_alpha=1.0
target_alpha=0.2
color_palette="tab10"

# Environment definitions
door_obstacle_center_1={(-40.0, 0.0)}
door_obstacle_size_1={(70.0, 5.0)}
door_obstacle_center_2={(40.0, 0.0)}
door_obstacle_size_2={(70.0, 5.0)}

wall_obstacle_center={(0.0, 0.0)}
wall_obstacle_size={(10.0, 5.0)}

combined_obstacle_center_1={(-50.0, 0.0)}
combined_obstacle_size_1={(70.0, 2.0)}
combined_obstacle_center_2={(50.0, 0.0)}
combined_obstacle_size_2={(70.0, 2.0)}
combined_obstacle_center_3={(5.0, -1.0)}
combined_obstacle_size_3={(3.0, 10.0)}

# Agent configurations
agent1_id=1
agent1_radius=2.5
agent1_initial_position={(-4.0, 10.0)}
agent1_target_position={(-10.0, -10.0)}

agent2_id=2
agent2_radius=1.5
agent2_initial_position={(-10.0, 5.0)}
agent2_target_position={(10.0, -15.0)}

agent3_id=3
agent3_radius=1.0
agent3_initial_position={(-15.0, -10.0)}
agent3_target_position={(10.0, 10.0)}

agent4_id=4
agent4_radius=2.5
agent4_initial_position={(0.0, -10.0)}
agent4_target_position={(-10.0, 15.0)}

# Experiment configurations
experiment_seeds={(42, 123)}
results_dir="results"
animation_template="{environment}_{seed}.gif"
control_vis_filename="control_signals.gif"
obstacle_distance_filename="obstacle_distance.png"
path_uncertainty_filename="path_uncertainty.png"
convergence_filename="convergence.png"

Equations

# State space model:
# x_{t+1} = A * x_t + B * u_t + w_t,  w_t ~ N(0, control_variance)
# y_t = C * x_t + v_t,                v_t ~ N(0, observation_variance)
#
# Obstacle avoidance constraint:
# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)
# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle
#
# Goal constraint:
# p(x_T | goal) ~ N(goal, goal_constraint_variance)
# where x_T is the final position
#
# Collision avoidance constraint:
# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)
# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii

Time

Dynamic
DiscreteTime
ModelTimeHorizon=nr_steps

ActInfOntologyAnnotation

dt=TimeStep
gamma=ConstraintParameter
nr_steps=TrajectoryLength
nr_iterations=InferenceIterations
nr_agents=NumberOfAgents
softmin_temperature=SoftminTemperature
A=StateTransitionMatrix
B=ControlInputMatrix
C=ObservationMatrix
initial_state_variance=InitialStateVariance
control_variance=ControlVariance
goal_constraint_variance=GoalConstraintVariance

ModelParameters

nr_agents=4
nr_steps=40
nr_iterations=350

Footer

Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl

Signature

Creator: AI Assistant for GNN
Date: 2024-07-27
Status: Example for RxInfer.jl multi-agent trajectory planning

Parsed Sections

_HeaderComments

# GNN Example: RxInfer Multi-agent Trajectory Planning
# Format: Markdown representation of a Multi-agent Trajectory Planning model for RxInfer.jl
# Version: 1.0
# This file is machine-readable and represents the configuration for the RxInfer.jl multi-agent trajectory planning example.

ModelName

Multi-agent Trajectory Planning

GNNSection

RxInferMultiAgentTrajectoryPlanning

GNNVersionAndFlags

GNN v1

ModelAnnotation

This model represents a multi-agent trajectory planning scenario in RxInfer.jl.
It includes:
- State space model for agents moving in a 2D environment
- Obstacle avoidance constraints
- Goal-directed behavior
- Inter-agent collision avoidance
The model can be used to simulate trajectory planning in various environments with obstacles.

StateSpaceBlock

# Model parameters
dt[1,type=float]               # Time step for the state space model
gamma[1,type=float]            # Constraint parameter for the Halfspace node
nr_steps[1,type=int]           # Number of time steps in the trajectory
nr_iterations[1,type=int]      # Number of inference iterations
nr_agents[1,type=int]          # Number of agents in the simulation
softmin_temperature[1,type=float] # Temperature parameter for the softmin function
intermediate_steps[1,type=int] # Intermediate results saving interval
save_intermediates[1,type=bool] # Whether to save intermediate results

# State space matrices
A[4,4,type=float]              # State transition matrix
B[4,2,type=float]              # Control input matrix
C[2,4,type=float]              # Observation matrix

# Prior distributions
initial_state_variance[1,type=float]    # Prior on initial state
control_variance[1,type=float]          # Prior on control inputs
goal_constraint_variance[1,type=float]  # Goal constraints variance
gamma_shape[1,type=float]               # Parameters for GammaShapeRate prior
gamma_scale_factor[1,type=float]        # Parameters for GammaShapeRate prior

# Visualization parameters
x_limits[2,type=float]            # Plot boundaries (x-axis)
y_limits[2,type=float]            # Plot boundaries (y-axis)
fps[1,type=int]                   # Animation frames per second
heatmap_resolution[1,type=int]    # Heatmap resolution
plot_width[1,type=int]            # Plot width
plot_height[1,type=int]           # Plot height
agent_alpha[1,type=float]         # Visualization alpha for agents
target_alpha[1,type=float]        # Visualization alpha for targets
color_palette[1,type=string]      # Color palette for visualization

# Environment definitions
door_obstacle_center_1[2,type=float]    # Door environment, obstacle 1 center
door_obstacle_size_1[2,type=float]      # Door environment, obstacle 1 size
door_obstacle_center_2[2,type=float]    # Door environment, obstacle 2 center
door_obstacle_size_2[2,type=float]      # Door environment, obstacle 2 size

wall_obstacle_center[2,type=float]      # Wall environment, obstacle center
wall_obstacle_size[2,type=float]        # Wall environment, obstacle size

combined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center
combined_obstacle_size_1[2,type=float]   # Combined environment, obstacle 1 size
combined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center
combined_obstacle_size_2[2,type=float]   # Combined environment, obstacle 2 size
combined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center
combined_obstacle_size_3[2,type=float]   # Combined environment, obstacle 3 size

# Agent configurations
agent1_id[1,type=int]                   # Agent 1 ID
agent1_radius[1,type=float]             # Agent 1 radius
agent1_initial_position[2,type=float]   # Agent 1 initial position
agent1_target_position[2,type=float]    # Agent 1 target position

agent2_id[1,type=int]                   # Agent 2 ID
agent2_radius[1,type=float]             # Agent 2 radius
agent2_initial_position[2,type=float]   # Agent 2 initial position
agent2_target_position[2,type=float]    # Agent 2 target position

agent3_id[1,type=int]                   # Agent 3 ID
agent3_radius[1,type=float]             # Agent 3 radius
agent3_initial_position[2,type=float]   # Agent 3 initial position
agent3_target_position[2,type=float]    # Agent 3 target position

agent4_id[1,type=int]                   # Agent 4 ID
agent4_radius[1,type=float]             # Agent 4 radius
agent4_initial_position[2,type=float]   # Agent 4 initial position
agent4_target_position[2,type=float]    # Agent 4 target position

# Experiment configurations
experiment_seeds[2,type=int]            # Random seeds for reproducibility
results_dir[1,type=string]              # Base directory for results
animation_template[1,type=string]       # Filename template for animations
control_vis_filename[1,type=string]     # Filename for control visualization
obstacle_distance_filename[1,type=string] # Filename for obstacle distance plot
path_uncertainty_filename[1,type=string]  # Filename for path uncertainty plot
convergence_filename[1,type=string]       # Filename for convergence plot

Connections

# Model parameters
dt > A
(A, B, C) > state_space_model

# Agent trajectories
(state_space_model, initial_state_variance, control_variance) > agent_trajectories

# Goal constraints
(agent_trajectories, goal_constraint_variance) > goal_directed_behavior

# Obstacle avoidance
(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance

# Collision avoidance
(agent_trajectories, nr_agents) > collision_avoidance

# Complete planning system
(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system
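
Read as a graph, the '>' statements above form a directed acyclic graph from parameters to the planning system. An abridged sketch (assuming networkx is available; some parent nodes such as gamma and nr_agents are omitted for brevity):

```python
# The directed ('>') connections above as an abridged DAG.
import networkx as nx

g = nx.DiGraph()
g.add_edge("dt", "A")
g.add_edges_from((m, "state_space_model") for m in ("A", "B", "C"))
g.add_edges_from((p, "agent_trajectories")
                 for p in ("state_space_model", "initial_state_variance", "control_variance"))
g.add_edge("agent_trajectories", "goal_directed_behavior")
g.add_edge("agent_trajectories", "obstacle_avoidance")
g.add_edge("agent_trajectories", "collision_avoidance")
g.add_edges_from((s, "planning_system")
                 for s in ("goal_directed_behavior", "obstacle_avoidance", "collision_avoidance"))

print(nx.is_directed_acyclic_graph(g))   # True
print(list(nx.topological_sort(g))[-1])  # 'planning_system' is the single sink
```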

InitialParameterization

# Model parameters
dt=1.0
gamma=1.0
nr_steps=40
nr_iterations=350
nr_agents=4
softmin_temperature=10.0
intermediate_steps=10
save_intermediates=false

# State space matrices
# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]
A={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}

# B = [0 0; dt 0; 0 0; 0 dt]
B={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}

# C = [1 0 0 0; 0 0 1 0]
C={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}

# Prior distributions
initial_state_variance=100.0
control_variance=0.1
goal_constraint_variance=0.00001
gamma_shape=1.5
gamma_scale_factor=0.5

# Visualization parameters
x_limits={(-20, 20)}
y_limits={(-20, 20)}
fps=15
heatmap_resolution=100
plot_width=800
plot_height=400
agent_alpha=1.0
target_alpha=0.2
color_palette="tab10"

# Environment definitions
door_obstacle_center_1={(-40.0, 0.0)}
door_obstacle_size_1={(70.0, 5.0)}
door_obstacle_center_2={(40.0, 0.0)}
door_obstacle_size_2={(70.0, 5.0)}

wall_obstacle_center={(0.0, 0.0)}
wall_obstacle_size={(10.0, 5.0)}

combined_obstacle_center_1={(-50.0, 0.0)}
combined_obstacle_size_1={(70.0, 2.0)}
combined_obstacle_center_2={(50.0, 0.0)}
combined_obstacle_size_2={(70.0, 2.0)}
combined_obstacle_center_3={(5.0, -1.0)}
combined_obstacle_size_3={(3.0, 10.0)}

# Agent configurations
agent1_id=1
agent1_radius=2.5
agent1_initial_position={(-4.0, 10.0)}
agent1_target_position={(-10.0, -10.0)}

agent2_id=2
agent2_radius=1.5
agent2_initial_position={(-10.0, 5.0)}
agent2_target_position={(10.0, -15.0)}

agent3_id=3
agent3_radius=1.0
agent3_initial_position={(-15.0, -10.0)}
agent3_target_position={(10.0, 10.0)}

agent4_id=4
agent4_radius=2.5
agent4_initial_position={(0.0, -10.0)}
agent4_target_position={(-10.0, 15.0)}

# Experiment configurations
experiment_seeds={(42, 123)}
results_dir="results"
animation_template="{environment}_{seed}.gif"
control_vis_filename="control_signals.gif"
obstacle_distance_filename="obstacle_distance.png"
path_uncertainty_filename="path_uncertainty.png"
convergence_filename="convergence.png"

Equations

# State space model:
# x_{t+1} = A * x_t + B * u_t + w_t,  w_t ~ N(0, control_variance)
# y_t = C * x_t + v_t,                v_t ~ N(0, observation_variance)
#
# Obstacle avoidance constraint:
# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)
# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle
#
# Goal constraint:
# p(x_T | goal) ~ N(goal, goal_constraint_variance)
# where x_T is the final position
#
# Collision avoidance constraint:
# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)
# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii
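
A NumPy sketch of one simulated step of these equations (an illustration only; the actual inference runs in RxInfer.jl), using the A, B, C and control_variance values from InitialParameterization with dt = 1.0 and state x = [px, vx, py, vy]:

```python
# One step of the linear state-space model above, plus the collision-avoidance
# clearance quantity ||x_i - x_j|| - (r_i + r_j) for agents 1 and 2.
import numpy as np

dt = 1.0
A = np.array([[1, dt, 0, 0], [0, 1, 0, 0], [0, 0, 1, dt], [0, 0, 0, 1]], float)
B = np.array([[0, 0], [dt, 0], [0, 0], [0, dt]], float)
C = np.array([[1, 0, 0, 0], [0, 0, 1, 0]], float)

rng = np.random.default_rng(42)             # first entry of experiment_seeds
x = np.array([-4.0, 0.0, 10.0, 0.0])        # agent1 starts at (-4, 10), at rest
u = np.array([0.5, -0.5])                   # an arbitrary control input
w = rng.normal(0.0, np.sqrt(0.1), size=4)   # control_variance = 0.1

x_next = A @ x + B @ u + w                  # x_{t+1} = A x_t + B u_t + w_t
y = C @ x_next                              # y_t = C x_t (observation noise omitted)

pos_i, pos_j = np.array([-4.0, 10.0]), np.array([-10.0, 5.0])
clearance = np.linalg.norm(pos_i - pos_j) - (2.5 + 1.5)  # radii of agents 1 and 2
```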

Time

Dynamic
DiscreteTime
ModelTimeHorizon=nr_steps

ActInfOntologyAnnotation

dt=TimeStep
gamma=ConstraintParameter
nr_steps=TrajectoryLength
nr_iterations=InferenceIterations
nr_agents=NumberOfAgents
softmin_temperature=SoftminTemperature
A=StateTransitionMatrix
B=ControlInputMatrix
C=ObservationMatrix
initial_state_variance=InitialStateVariance
control_variance=ControlVariance
goal_constraint_variance=GoalConstraintVariance

ModelParameters

nr_agents=4
nr_steps=40
nr_iterations=350

Footer

Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl

Signature

Creator: AI Assistant for GNN
Date: 2024-07-27
Status: Example for RxInfer.jl multi-agent trajectory planning

JSON Files

full_model_data.json

{
  "_HeaderComments": "# GNN Example: RxInfer Multi-agent Trajectory Planning\n# Format: Markdown representation of a Multi-agent Trajectory Planning model for RxInfer.jl\n# Version: 1.0\n# This file is machine-readable and represents the configuration for the RxInfer.jl multi-agent trajectory planning example.",
  "ModelName": "Multi-agent Trajectory Planning",
  "GNNSection": "RxInferMultiAgentTrajectoryPlanning",
  "GNNVersionAndFlags": "GNN v1",
  "ModelAnnotation": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles.",
  "StateSpaceBlock": "# Model parameters\ndt[1,type=float]               # Time step for the state space model\ngamma[1,type=float]            # Constraint parameter for the Halfspace node\nnr_steps[1,type=int]           # Number of time steps in the trajectory\nnr_iterations[1,type=int]      # Number of inference iterations\nnr_agents[1,type=int]          # Number of agents in the simulation\nsoftmin_temperature[1,type=float] # Temperature parameter for the softmin function\nintermediate_steps[1,type=int] # Intermediate results saving interval\nsave_intermediates[1,type=bool] # Whether to save intermediate results\n\n# State space matrices\nA[4,4,type=float]              # State transition matrix\nB[4,2,type=float]              # Control input matrix\nC[2,4,type=float]              # Observation matrix\n\n# Prior distributions\ninitial_state_variance[1,type=float]    # Prior on initial state\ncontrol_variance[1,type=float]          # Prior on control inputs\ngoal_constraint_variance[1,type=float]  # Goal constraints variance\ngamma_shape[1,type=float]               # Parameters for GammaShapeRate prior\ngamma_scale_factor[1,type=float]        # Parameters for GammaShapeRate prior\n\n# Visualization parameters\nx_limits[2,type=float]            # Plot boundaries (x-axis)\ny_limits[2,type=float]            # Plot boundaries (y-axis)\nfps[1,type=int]                   # Animation frames per second\nheatmap_resolution[1,type=int]    # Heatmap resolution\nplot_width[1,type=int]            # Plot width\nplot_height[1,type=int]           # Plot height\nagent_alpha[1,type=float]         # Visualization alpha for agents\ntarget_alpha[1,type=float]        # Visualization alpha for targets\ncolor_palette[1,type=string]      # Color palette for visualization\n\n# Environment definitions\ndoor_obstacle_center_1[2,type=float]    # Door environment, obstacle 1 center\ndoor_obstacle_size_1[2,type=float]      # Door environment, obstacle 1 size\ndoor_obstacle_center_2[2,type=float]    # Door environment, obstacle 2 center\ndoor_obstacle_size_2[2,type=float]      # Door environment, obstacle 2 size\n\nwall_obstacle_center[2,type=float]      # Wall environment, obstacle center\nwall_obstacle_size[2,type=float]        # Wall environment, obstacle size\n\ncombined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center\ncombined_obstacle_size_1[2,type=float]   # Combined environment, obstacle 1 size\ncombined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center\ncombined_obstacle_size_2[2,type=float]   # Combined environment, obstacle 2 size\ncombined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center\ncombined_obstacle_size_3[2,type=float]   # Combined environment, obstacle 3 size\n\n# Agent configurations\nagent1_id[1,type=int]                   # Agent 1 ID\nagent1_radius[1,type=float]             # Agent 1 radius\nagent1_initial_position[2,type=float]   # Agent 1 initial position\nagent1_target_position[2,type=float]    # Agent 1 target position\n\nagent2_id[1,type=int]                   # Agent 2 ID\nagent2_radius[1,type=float]             # Agent 2 radius\nagent2_initial_position[2,type=float]   # Agent 2 initial position\nagent2_target_position[2,type=float]    # Agent 2 target position\n\nagent3_id[1,type=int]                   # Agent 3 ID\nagent3_radius[1,type=float]             # Agent 3 radius\nagent3_initial_position[2,type=float]   # Agent 3 initial position\nagent3_target_position[2,type=float]    # Agent 3 target 
position\n\nagent4_id[1,type=int]                   # Agent 4 ID\nagent4_radius[1,type=float]             # Agent 4 radius\nagent4_initial_position[2,type=float]   # Agent 4 initial position\nagent4_target_position[2,type=float]    # Agent 4 target position\n\n# Experiment configurations\nexperiment_seeds[2,type=int]            # Random seeds for reproducibility\nresults_dir[1,type=string]              # Base directory for results\nanimation_template[1,type=string]       # Filename template for animations\ncontrol_vis_filename[1,type=string]     # Filename for control visualization\nobstacle_distance_filename[1,type=string] # Filename for obstacle distance plot\npath_uncertainty_filename[1,type=string]  # Filename for path uncertainty plot\nconvergence_filename[1,type=string]       # Filename for convergence plot",
  "Connections": "# Model parameters\ndt > A\n(A, B, C) > state_space_model\n\n# Agent trajectories\n(state_space_model, initial_state_variance, control_variance) > agent_trajectories\n\n# Goal constraints\n(agent_trajectories, goal_constraint_variance) > goal_directed_behavior\n\n# Obstacle avoidance\n(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance\n\n# Collision avoidance\n(agent_trajectories, nr_agents) > collision_avoidance\n\n# Complete planning system\n(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system",
  "InitialParameterization": "# Model parameters\ndt=1.0\ngamma=1.0\nnr_steps=40\nnr_iterations=350\nnr_agents=4\nsoftmin_temperature=10.0\nintermediate_steps=10\nsave_intermediates=false\n\n# State space matrices\n# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]\nA={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}\n\n# B = [0 0; dt 0; 0 0; 0 dt]\nB={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}\n\n# C = [1 0 0 0; 0 0 1 0]\nC={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}\n\n# Prior distributions\ninitial_state_variance=100.0\ncontrol_variance=0.1\ngoal_constraint_variance=0.00001\ngamma_shape=1.5\ngamma_scale_factor=0.5\n\n# Visualization parameters\nx_limits={(-20, 20)}\ny_limits={(-20, 20)}\nfps=15\nheatmap_resolution=100\nplot_width=800\nplot_height=400\nagent_alpha=1.0\ntarget_alpha=0.2\ncolor_palette=\"tab10\"\n\n# Environment definitions\ndoor_obstacle_center_1={(-40.0, 0.0)}\ndoor_obstacle_size_1={(70.0, 5.0)}\ndoor_obstacle_center_2={(40.0, 0.0)}\ndoor_obstacle_size_2={(70.0, 5.0)}\n\nwall_obstacle_center={(0.0, 0.0)}\nwall_obstacle_size={(10.0, 5.0)}\n\ncombined_obstacle_center_1={(-50.0, 0.0)}\ncombined_obstacle_size_1={(70.0, 2.0)}\ncombined_obstacle_center_2={(50.0, 0.0)}\ncombined_obstacle_size_2={(70.0, 2.0)}\ncombined_obstacle_center_3={(5.0, -1.0)}\ncombined_obstacle_size_3={(3.0, 10.0)}\n\n# Agent configurations\nagent1_id=1\nagent1_radius=2.5\nagent1_initial_position={(-4.0, 10.0)}\nagent1_target_position={(-10.0, -10.0)}\n\nagent2_id=2\nagent2_radius=1.5\nagent2_initial_position={(-10.0, 5.0)}\nagent2_target_position={(10.0, -15.0)}\n\nagent3_id=3\nagent3_radius=1.0\nagent3_initial_position={(-15.0, -10.0)}\nagent3_target_position={(10.0, 10.0)}\n\nagent4_id=4\nagent4_radius=2.5\nagent4_initial_position={(0.0, -10.0)}\nagent4_target_position={(-10.0, 15.0)}\n\n# Experiment configurations\nexperiment_seeds={(42, 123)}\nresults_dir=\"results\"\nanimation_template=\"{environment}_{seed}.gif\"\ncontrol_vis_filename=\"control_signals.gif\"\nobstacle_distance_filename=\"obstacle_distance.png\"\npath_uncertainty_filename=\"path_uncertainty.png\"\nconvergence_filename=\"convergence.png\"",
  "Equations": "# State space model:\n# x_{t+1} = A * x_t + B * u_t + w_t,  w_t ~ N(0, control_variance)\n# y_t = C * x_t + v_t,                v_t ~ N(0, observation_variance)\n#\n# Obstacle avoidance constraint:\n# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)\n# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle\n#\n# Goal constraint:\n# p(x_T | goal) ~ N(goal, goal_constraint_variance)\n# where x_T is the final position\n#\n# Collision avoidance constraint:\n# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)\n# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii",
  "Time": "Dynamic\nDiscreteTime\nModelTimeHorizon=nr_steps",
  "ActInfOntologyAnnotation": "dt=TimeStep\ngamma=ConstraintParameter\nnr_steps=TrajectoryLength\nnr_iterations=InferenceIterations\nnr_agents=NumberOfAgents\nsoftmin_temperature=SoftminTemperature\nA=StateTransitionMatrix\nB=ControlInputMatrix\nC=ObservationMatrix\ninitial_state_variance=InitialStateVariance\ncontrol_variance=ControlVariance\ngoal_constraint_variance=GoalConstraintVariance",
  "ModelParameters": "nr_agents=4\nnr_steps=40\nnr_iterations=350",
  "Footer": "Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl",
  "Signature": "Creator: AI Assistant for GNN\nDate: 2024-07-27\nStatus: Example for RxInfer.jl multi-agent trajectory planning"
}

model_metadata.json

{
  "ModelName": "Multi-agent Trajectory Planning",
  "ModelAnnotation": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles.",
  "GNNVersionAndFlags": "GNN v1",
  "Time": "Dynamic\nDiscreteTime\nModelTimeHorizon=nr_steps",
  "ActInfOntologyAnnotation": "dt=TimeStep\ngamma=ConstraintParameter\nnr_steps=TrajectoryLength\nnr_iterations=InferenceIterations\nnr_agents=NumberOfAgents\nsoftmin_temperature=SoftminTemperature\nA=StateTransitionMatrix\nB=ControlInputMatrix\nC=ObservationMatrix\ninitial_state_variance=InitialStateVariance\ncontrol_variance=ControlVariance\ngoal_constraint_variance=GoalConstraintVariance"
}

MCP Integration Report (Step 7)

🤖 MCP Integration and API Report

🗓️ Report Generated: 2025-06-06 13:11:16

MCP Core Directory: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/mcp
Project Source Root (for modules): /home/trim/Documents/GitHub/GeneralizedNotationNotation/src
Output Directory for this report: /home/trim/Documents/GitHub/GeneralizedNotationNotation/output/mcp_processing_step

🌐 Global Summary of Registered MCP Tools

This section lists all tools currently registered with the MCP system, along with their defining module, arguments, and description.

🔬 Core MCP File Check

This section verifies the presence of essential MCP files in the core directory: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/mcp

Status: 5/5 core MCP files found; all core files are present.

🧩 Functional Module MCP Integration & API Check

Checking for mcp.py in these subdirectories of /home/trim/Documents/GitHub/GeneralizedNotationNotation/src: ['export', 'gnn', 'gnn_type_checker', 'ontology', 'setup', 'tests', 'visualization', 'llm']
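
A hypothetical sketch of this presence check (illustrative only; the pipeline's actual implementation may differ):

```python
# Illustrative mcp.py presence check over the functional module directories.
from pathlib import Path

src_root = Path("/home/trim/Documents/GitHub/GeneralizedNotationNotation/src")
modules = ["export", "gnn", "gnn_type_checker", "ontology",
           "setup", "tests", "visualization", "llm"]

for name in modules:
    has_mcp = (src_root / name / "mcp.py").is_file()
    print(f"Module: {name:17s} mcp.py {'found' if has_mcp else 'MISSING'}")
```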

Module: export (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/export)


Module: gnn (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn)


Module: gnn_type_checker (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn_type_checker)


Module: ontology (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/ontology)


Module: setup (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/setup)


Module: tests (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/tests)


Module: visualization (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/visualization)


Module: llm (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/llm)


📊 Overall Module Integration Summary

Ontology Processing (Step 8)

🧬 GNN Ontological Annotations Report

📊 Summary of Ontology Processing


🗓️ Report Generated: 2025-06-06 13:11:17
🎯 GNN Source Directory: src/gnn/examples
📖 Ontology Terms Definition: src/ontology/act_inf_ontology_terms.json (Loaded: 48 terms)


Ontological Annotations for src/gnn/examples/pymdp_pomdp_agent.md

Mappings:

Validation Summary: All ontological terms are recognized.


Ontological Annotations for src/gnn/examples/rxinfer_multiagent_gnn.md

Mappings:

Validation Summary: 12 unrecognized ontological term(s) found.
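
A hypothetical sketch of what this validation amounts to: each annotation's ontology term is checked against the loaded definitions. It assumes the terms file is a JSON object keyed by term name; the variable names and the sample mappings are illustrative, not the pipeline's actual API.

```python
# Illustrative ontology-term validation against act_inf_ontology_terms.json.
import json

with open("src/ontology/act_inf_ontology_terms.json") as f:
    known_terms = set(json.load(f).keys())   # 48 terms, per the report header

annotations = {                               # sample mappings from the GNN file
    "dt": "TimeStep",
    "gamma": "ConstraintParameter",
    "nr_steps": "TrajectoryLength",
}

unrecognized = {var: term for var, term in annotations.items()
                if term not in known_terms}
print(f"{len(unrecognized)} unrecognized ontological term(s) found.")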


Rendered Simulators (Step 9)

LLM Processing Outputs (Step 11)

LLM Outputs for pymdp_pomdp_agent: pymdp_pomdp_agent

JSON Files

pymdp_pomdp_agent_comprehensive_analysis.json

{
  "model_purpose": "The model represents a Multifactor PyMDP agent with multiple observation modalities and hidden state factors, aimed at active inference in decision-making scenarios.",
  "key_components": {
    "states": {
      "hidden_states": {
        "reward_level": {
          "states": 2,
          "description": "Represents the level of reward, with two possible states."
        },
        "decision_state": {
          "states": 3,
          "description": "Represents the state of the decision-making process, with three possible states."
        }
      }
    },
    "observations": {
      "state_observation": {
        "outcomes": 3,
        "description": "Observations related to the state of the environment."
      },
      "reward": {
        "outcomes": 3,
        "description": "Observations related to the reward received."
      },
      "decision_proprioceptive": {
        "outcomes": 3,
        "description": "Observations related to the proprioceptive state of decision-making."
      }
    },
    "actions": {
      "decision_state": {
        "actions": 3,
        "description": "Controllable actions that influence the decision state."
      }
    },
    "policies": {
      "policy_vector": {
        "description": "Distribution over actions for the decision state."
      }
    },
    "prior": {
      "factors": {
        "reward_level": {
          "description": "Prior over the hidden states of the reward level."
        },
        "decision_state": {
          "description": "Prior over the hidden states of the decision state."
        }
      }
    }
  },
  "component_interactions": {
    "hidden_states": {
      "interactions": [
        "D_f0 and D_f1 connect to s_f0 and s_f1.",
        "s_f0 and s_f1 influence the A_m matrices (likelihoods).",
        "A_m matrices lead to observations o_m0, o_m1, o_m2.",
        "s_f0, s_f1, and u_f1 influence the B_f matrices (transitions).",
        "B_f matrices determine the next hidden states s_prime_f0 and s_prime_f1.",
        "C_m vectors influence the expected free energy G.",
        "G is used to derive the policy vector \u03c0_f1."
      ]
    }
  },
  "data_types_and_dimensions": {
    "A_matrices": {
      "dimensions": "[3, 2, 3]",
      "type": "float"
    },
    "B_matrices": {
      "B_f0": {
        "dimensions": "[2, 2, 1]",
        "type": "float"
      },
      "B_f1": {
        "dimensions": "[3, 3, 3]",
        "type": "float"
      }
    },
    "C_vectors": {
      "dimensions": "[3]",
      "type": "float"
    },
    "D_vectors": {
      "D_f0": {
        "dimensions": "[2]",
        "type": "float"
      },
      "D_f1": {
        "dimensions": "[3]",
        "type": "float"
      }
    },
    "hidden_states": {
      "s_f0": {
        "dimensions": "[2, 1]",
        "type": "float"
      },
      "s_f1": {
        "dimensions": "[3, 1]",
        "type": "float"
      }
    },
    "observations": {
      "o_m0": {
        "dimensions": "[3, 1]",
        "type": "float"
      },
      "o_m1": {
        "dimensions": "[3, 1]",
        "type": "float"
      },
      "o_m2": {
        "dimensions": "[3, 1]",
        "type": "float"
      }
    },
    "policy": {
      "\u03c0_f1": {
        "dimensions": "[3]",
        "type": "float"
      }
    },
    "action": {
      "u_f1": {
        "dimensions": "[1]",
        "type": "int"
      }
    },
    "expected_free_energy": {
      "G": {
        "dimensions": "[1]",
        "type": "float"
      }
    },
    "time": {
      "t": {
        "dimensions": "[1]",
        "type": "int"
      }
    }
  },
  "potential_applications": [
    "Decision-making frameworks in AI systems",
    "Robotic control systems where multiple sensory modalities are involved",
    "Simulation of cognitive processes in active inference models",
    "Optimization of reward-based learning algorithms"
  ],
  "limitations_or_ambiguities": [
    "The initial parameterization may require further empirical validation.",
    "The model assumes discrete time dynamics, which may not fit all real-world scenarios.",
    "Ambiguities regarding the specific implementation details of the softmax function for action probabilities."
  ],
  "ontology_mapping_assessment": {
    "ActInfOntology_terms": [
      "A_m0, A_m1, A_m2 are mapped to likelihood matrices.",
      "B_f0 and B_f1 are mapped to transition matrices.",
      "C_m0, C_m1, C_m2 are mapped to log preference vectors.",
      "D_f0 and D_f1 are mapped to prior distributions over hidden states.",
      "s_f0, s_f1 are mapped to hidden state factors.",
      "o_m0, o_m1, o_m2 are mapped to observation modalities.",
      "\u03c0_f1 is mapped to policy vector.",
      "u_f1 is mapped to action factor."
    ],
    "relevance": "The terms present are relevant and provide a clear mapping to the components of the model, facilitating understanding and implementation."
  }
}
pymdp_pomdp_agent_comprehensive_analysis.json

pymdp_pomdp_agent_qa.json

[
  {
    "question": "What are the implications of having multiple observation modalities on the agent's decision-making process, and how do these modalities interact with each other?",
    "answer": "The GNN file indicates that the Multifactor PyMDP Agent utilizes multiple observation modalities\u2014specifically \"state_observation,\" \"reward,\" and \"decision_proprioceptive\"\u2014each with three outcomes. The implications of having these multiple modalities on the agent's decision-making process include:\n\n1. **Diverse Information Sources**: The agent can integrate information from different modalities, which may provide a more comprehensive view of the environment. Each modality contributes distinct data that can inform state inference and action selection.\n\n2. **Enhanced State Inference**: The presence of multiple modalities allows the agent to improve the accuracy of hidden state estimations. For example, the likelihood matrices (A_m0, A_m1, A_m2) define how each modality's observations relate to the hidden states, enabling better predictions about the system's current state.\n\n3. **Policy Influence**: The decision-making process is influenced by the preferences defined in the C_vectors corresponding to each modality. These preferences affect the overall expected free energy (G), which in turn impacts the policy distribution (\u03c0_f1) the agent uses to decide on actions.\n\n4. **Interaction Between Modalities**: The GNN structure indicates that the observations from different modalities are interconnected through the state factors and transition matrices. For example, the hidden states (s_f0, s_f1) influence the likelihood of observations (o_m0, o_m1, o_m2) and the transitions between states (B_f0, B_f1). This interaction suggests that the observations are not independent; rather, they collectively shape the agent\u2019s understanding and response to its environment.\n\nThus, multiple observation modalities enrich the decision-making process by providing varied data, enhancing state inference, and allowing for more nuanced policy adjustments through their interdependencies."
  },
  {
    "question": "How does the choice of preference vectors (C_m0, C_m1, C_m2) influence the agent's behavior, particularly in terms of prioritizing different observations?",
    "answer": "The GNN file indicates that the preference vectors (C_m0, C_m1, C_m2) are used to define the agent's preferences for different observation modalities. Specifically:\n\n- **C_m0** (preferences for modality 0) is set to zeros, indicating no preference for observations from this modality.\n- **C_m1** (preferences for modality 1) has values of 1.0 for the first observation, -2.0 for the second, and 0.0 for the third. This suggests a strong preference for the first observation and a strong aversion to the second observation, which would influence the agent to prioritize the first observation when making decisions.\n- **C_m2** (preferences for modality 2) is also set to zeros, indicating no preference for observations from this modality.\n\nIn summary, the choice of preference vectors directly influences the agent's behavior by prioritizing certain observations over others. Specifically, the agent is likely to favor the first observation from modality 1 due to its positive preference, while disregarding observations from modalities 0 and 2."
  },
  {
    "question": "What are the potential consequences of the uncontrolled transition dynamics in factor 0 (B_f0) on the overall performance of the agent?",
    "answer": "The GNN file indicates that factor 0 (B_f0) has uncontrolled transition dynamics, represented by a simple identity matrix for state transitions. This means that the hidden state associated with \"reward_level\" (s_f0) transitions deterministically from one state to another without any influence from actions. \n\nThe potential consequences of these uncontrolled dynamics on the overall performance of the agent may include:\n\n1. **Limited Adaptability**: The agent may struggle to adapt to changes in the environment or tasks since it cannot influence the transitions of the reward level state. This could lead to suboptimal decision-making.\n\n2. **Predictable Behavior**: The deterministic transitions may lead to predictable patterns in the agent's behavior, which could be exploited by adversarial conditions or environments.\n\n3. **Reduced Exploration**: Without control over the transitions, the agent may not explore various states effectively, potentially missing out on better reward structures or strategies.\n\n4. **Dependence on Initial State**: The agent's performance may heavily rely on the initial state of factor 0, as it cannot adjust its trajectory based on feedback from actions or observations.\n\n5. **Inflexibility in Policy Adjustment**: The inability to control transitions may hinder the agent\u2019s ability to refine its policy effectively, limiting its ability to optimize long-term outcomes.\n\nOverall, the uncontrolled nature of transitions in factor 0 could negatively impact the agent's performance, leading to less effective learning and adaptation in dynamic environments."
  },
  {
    "question": "In what ways might the uniform priors (D_f0, D_f1) affect the initial state estimation of the hidden factors, and how could this impact the learning process over time?",
    "answer": "The uniform priors (D_f0 and D_f1) for the hidden state factors indicate that, initially, the agent has no preference or prior belief about the states of these factors. This means that the agent starts with an equal likelihood of being in any state for both the \"reward_level\" (D_f0) and \"decision_state\" (D_f1) factors.\n\nIn terms of initial state estimation, this uniformity can lead to a slower convergence in the learning process. Since the agent is not biased towards any specific state, it may require more observations and interactions with the environment to refine its beliefs about the hidden states based on the received observations. Over time, as the agent gathers more data, it will update its beliefs based on the observations and the transition dynamics, potentially leading to more accurate state estimations. However, the initial lack of preference could hinder quick adaptation to environmental changes or patterns, resulting in longer learning times until the agent's state estimates become reliable. \n\nOverall, while uniform priors provide a neutral starting point, they can slow down the learning process as the agent needs to explore more to discover which states are more likely based on the actual data it receives."
  },
  {
    "question": "How does the expected free energy (G) relate to the agent's overall performance and adaptability in dynamic environments?",
    "answer": "The GNN file does not provide sufficient information to explicitly explain how the expected free energy (G) relates to the agent's overall performance and adaptability in dynamic environments. While G is defined as the Expected Free Energy and is connected to the policy (\u03c0_f1) and the preferences (C_m0, C_m1, C_m2), the document does not detail the mechanisms or theories linking G to performance and adaptability. Thus, a direct relationship cannot be established based solely on the content of the GNN file."
  }
]
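
To make the uniform-prior discussion in the final answers concrete, the sketch below performs a single Bayesian belief update for the two-state reward_level factor. The likelihood column is an illustrative assumption, not a value from the GNN file; it shows how, with a uniform D_f0, the first posterior is driven entirely by the likelihood.

```python
import numpy as np

# Uniform prior over the two reward_level states, as D_f0 specifies
prior = np.array([0.5, 0.5])

# Illustrative likelihood of the received observation under each hidden state
# (one column of an A matrix; these values are assumptions, not from the model)
likelihood = np.array([0.8, 0.3])

# Posterior is proportional to likelihood * prior; with a uniform prior the
# posterior is shaped entirely by the likelihood, as the Q&A answer notes
posterior = likelihood * prior
posterior /= posterior.sum()
print(posterior)  # -> [0.727..., 0.272...]
```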

Text/Log Files

pymdp_pomdp_agent_summary.txt

### Summary of the GNN Model: Multifactor PyMDP Agent v1

**Model Name:** Multifactor PyMDP Agent v1

**Purpose:** This model specifies a multifactor active inference agent as a POMDP (Partially Observable Markov Decision Process) for the PyMDP library, incorporating multiple observation modalities and hidden state factors. It aims to facilitate decision-making and state inference in dynamic environments.

**Key Components:**

1. **Observation Modalities:**
   - **State Observation:** 3 outcomes
   - **Reward:** 3 outcomes
   - **Decision Proprioceptive:** 3 outcomes

2. **Hidden State Factors:**
   - **Reward Level:** 2 states
   - **Decision State:** 3 states (controllable with 3 possible actions)

3. **State and Transition Matrices:**
   - **Likelihood Matrices (A_m):** Define the relationship between observations and hidden states for each modality.
     - A_m0, A_m1, A_m2 represent the likelihoods for each observation modality respectively.
   - **Transition Matrices (B_f):** Describe the state transitions for hidden states.
     - B_f0 for the reward level (uncontrolled).
     - B_f1 for the decision state (controlled with actions).

4. **Preference and Prior Vectors:**
   - **Preference Vectors (C_m):** Indicate the preferences for each observation modality.
   - **Prior Vectors (D_f):** Define the initial assumptions about the hidden states.

5. **Hidden States:**
   - **s_f0:** Hidden state for the reward level.
   - **s_f1:** Hidden state for the decision state.
   - **Next Hidden States:** s_prime_f0, s_prime_f1 represent the next states in the process.

6. **Policy and Control:**
   - Policy vector (π_f1) for decision-making, which is influenced by the expected free energy (G) and the chosen action (u_f1).

**Main Connections:**
- Hidden states (s_f0, s_f1) are connected to their respective likelihood matrices (A_m).
- The likelihood matrices lead to observations (o_m).
- The control action (u_f1) selects among the transition dynamics in B_f1, while B_f0 is action-independent.
- The expected free energy (G) is derived from the preference vectors and influences the policy vector (π_f1).

This model is structured to dynamically infer states, policies, and actions over an unbounded time horizon, reflecting a sophisticated approach to decision-making in environments with partial observability.
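
As a rough companion to the summary above, the following NumPy sketch builds arrays with the stated dimensions (three outcomes per modality; 2 and 3 hidden states; 3 actions). The identity B_f0, the C_m values, and the uniform D_f priors follow the descriptions in this report; the uniform A entries and the per-action identity slices of B_f1 are placeholders, and a library such as pymdp would typically consume these as object arrays rather than plain lists.

```python
import numpy as np

num_obs = [3, 3, 3]      # outcomes per modality (from the summary)
num_states = [2, 3]      # reward_level, decision_state
num_actions = 3          # actions controlling factor 1

# Likelihood arrays A_m: outcomes x states_f0 x states_f1.
# Uniform placeholders; the real GNN file specifies the actual values.
A = [np.full((o, *num_states), 1.0 / o) for o in num_obs]

# Transition arrays B_f: next_state x state (x action for controlled factors).
B_f0 = np.eye(num_states[0])                                    # uncontrolled identity, per the Q&A
B_f1 = np.stack([np.eye(num_states[1])] * num_actions, axis=2)  # placeholder controlled dynamics
B = [B_f0, B_f1]

# Preference vectors C_m, with the values described in the Q&A above
C = [np.zeros(3), np.array([1.0, -2.0, 0.0]), np.zeros(3)]

# Uniform priors D_f over the hidden states
D = [np.full(s, 1.0 / s) for s in num_states]
```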

LLM Outputs for rxinfer_multiagent_gnn

Text/Log Files

rxinfer_multiagent_gnn_summary.txt

### Summary of the GNN Model: Multi-agent Trajectory Planning

**Model Name:** Multi-agent Trajectory Planning

**Purpose:** 
The model is designed for simulating multi-agent trajectory planning in a 2D environment using RxInfer.jl. It incorporates various constraints to facilitate agents' movement while avoiding obstacles and ensuring safety during interactions with other agents.

**Key Components:**

1. **State Space Model:**
   - **Parameters:**
     - Time step (`dt`)
     - State transition matrix (`A`)
     - Control input matrix (`B`)
     - Observation matrix (`C`)
   - **State Dynamics:** Describes how agents evolve over time based on their control inputs and noise.

2. **Observations:**
   - Initial agent states and trajectories are observed through the observation matrix (`C`), which maps states to observations.

3. **Constraints:**
   - **Obstacle Avoidance:** Ensures agents avoid predefined obstacles based on their positions relative to the obstacles.
   - **Goal-Directed Behavior:** Guides agents toward their target positions while accounting for noise in the observations.
   - **Inter-Agent Collision Avoidance:** Prevents agents from getting too close to each other based on their physical radii.

**Main Connections:**
- The state space model provides the foundation upon which agent trajectories are computed.
- Agent trajectories are influenced by initial state variances and control variances.
- Goal and obstacle constraints directly affect the agents' decision-making processes.
- All components converge into a complete planning system that orchestrates the agents' behavior in the environment.

This GNN model effectively integrates the dynamics of multiple agents navigating a space with obstacles, ensuring safe and efficient path planning while adhering to various constraints.
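
Read as a standard discrete-time linear-Gaussian model, the state dynamics above take the form x[t+1] = A·x[t] + B·u[t] + w[t] with observations y[t] = C·x[t] + v[t]. The NumPy sketch below simulates one agent under that reading; the constant-velocity state layout, matrix values, and noise scales are illustrative assumptions, since the GNN file's actual parameterization is not reproduced here.

```python
import numpy as np

dt = 0.1  # time step; illustrative value

# Constant-velocity dynamics for a 2D agent: state = [x, y, vx, vy]
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
B = np.array([[0, 0],
              [0, 0],
              [dt, 0],
              [0, dt]], dtype=float)       # control input accelerates the agent
C = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # only positions are observed

rng = np.random.default_rng(0)
x = np.zeros(4)                # initial state
u = np.array([1.0, 0.5])       # constant control input

for _ in range(5):
    x = A @ x + B @ u + rng.normal(0, 0.01, size=4)  # state transition + process noise
    y = C @ x + rng.normal(0, 0.05, size=2)          # noisy position observation
    print(y)
```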

Pipeline Log

Other Output Files/Directories